Test Report: Docker_Linux_crio 22353

dccbb7bb926f2ef30a57d8898bfc971889daa155:2025-12-29:43039

Failed tests (26/332)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable volcano --alsologtostderr -v=1: exit status 11 (245.524928ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 06:47:37.939737   22105 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:37.940011   22105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:37.940023   22105 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:37.940027   22105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:37.940253   22105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:37.940497   22105 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:37.940793   22105 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:37.940810   22105 addons.go:622] checking whether the cluster is paused
	I1229 06:47:37.940887   22105 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:37.940905   22105 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:37.941296   22105 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:37.960029   22105 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:37.960101   22105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:37.977067   22105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:38.074570   22105 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:38.074667   22105 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:38.104822   22105 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:38.104846   22105 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:38.104860   22105 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:38.104866   22105 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:38.104886   22105 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:38.104902   22105 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:38.104908   22105 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:38.104918   22105 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:38.104928   22105 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:38.104942   22105 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:38.104951   22105 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:38.104955   22105 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:38.104959   22105 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:38.104961   22105 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:38.104967   22105 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:38.104975   22105 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:38.104978   22105 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:38.104983   22105 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:38.104986   22105 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:38.104989   22105 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:38.104992   22105 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:38.104995   22105 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:38.104997   22105 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:38.105000   22105 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:38.105002   22105 cri.go:96] found id: ""
	I1229 06:47:38.105044   22105 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:38.119363   22105 out.go:203] 
	W1229 06:47:38.120764   22105 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:38.120785   22105 out.go:285] * 
	* 
	W1229 06:47:38.121552   22105 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:38.122630   22105 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (13.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.003332ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-8clqr" [2d242195-9ec9-4edb-a76d-7692909e715b] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.001892061s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-tq9sm" [c285867e-5e87-4d61-b445-882e6c785822] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002863371s
addons_test.go:394: (dbg) Run:  kubectl --context addons-264018 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-264018 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-264018 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.080239327s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 ip
2025/12/29 06:48:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable registry --alsologtostderr -v=1: exit status 11 (250.662026ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 06:48:00.306825   24277 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:00.307100   24277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:00.307114   24277 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:00.307121   24277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:00.307462   24277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:00.307800   24277 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:00.308257   24277 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:00.308283   24277 addons.go:622] checking whether the cluster is paused
	I1229 06:48:00.308419   24277 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:00.308451   24277 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:00.308931   24277 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:00.328546   24277 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:00.328613   24277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:00.348942   24277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:00.445743   24277 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:00.445836   24277 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:00.475767   24277 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:00.475790   24277 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:00.475794   24277 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:00.475797   24277 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:00.475800   24277 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:00.475804   24277 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:00.475807   24277 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:00.475809   24277 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:00.475812   24277 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:00.475824   24277 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:00.475828   24277 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:00.475831   24277 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:00.475834   24277 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:00.475837   24277 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:00.475840   24277 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:00.475844   24277 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:00.475848   24277 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:00.475851   24277 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:00.475855   24277 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:00.475861   24277 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:00.475864   24277 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:00.475867   24277 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:00.475870   24277 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:00.475873   24277 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:00.475876   24277 cri.go:96] found id: ""
	I1229 06:48:00.475925   24277 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:00.489772   24277 out.go:203] 
	W1229 06:48:00.490951   24277 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:00.490968   24277 out.go:285] * 
	* 
	W1229 06:48:00.491813   24277 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:00.492940   24277 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.58s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.935552ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-264018
addons_test.go:334: (dbg) Run:  kubectl --context addons-264018 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (240.934192ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 06:48:02.960718   25079 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:02.960839   25079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:02.960848   25079 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:02.960852   25079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:02.961037   25079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:02.961306   25079 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:02.962036   25079 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:02.962057   25079 addons.go:622] checking whether the cluster is paused
	I1229 06:48:02.962147   25079 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:02.962158   25079 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:02.962565   25079 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:02.980185   25079 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:02.980263   25079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:03.001744   25079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:03.097615   25079 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:03.097733   25079 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:03.127851   25079 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:03.127870   25079 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:03.127874   25079 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:03.127877   25079 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:03.127880   25079 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:03.127884   25079 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:03.127886   25079 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:03.127889   25079 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:03.127891   25079 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:03.127899   25079 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:03.127902   25079 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:03.127904   25079 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:03.127907   25079 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:03.127909   25079 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:03.127912   25079 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:03.127935   25079 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:03.127940   25079 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:03.127944   25079 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:03.127947   25079 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:03.127950   25079 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:03.127955   25079 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:03.127958   25079 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:03.127961   25079 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:03.127963   25079 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:03.127966   25079 cri.go:96] found id: ""
	I1229 06:48:03.128003   25079 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:03.140989   25079 out.go:203] 
	W1229 06:48:03.141973   25079 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:03.141990   25079 out.go:285] * 
	* 
	W1229 06:48:03.142702   25079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:03.143621   25079 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (10.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-264018 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-264018 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-264018 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [f587f4a6-db1f-454c-ae34-01138a0b757c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [f587f4a6-db1f-454c-ae34-01138a0b757c] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003364337s
I1229 06:48:09.941388   12733 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-264018 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (244.857911ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 06:48:10.726050   25963 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:10.726182   25963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:10.726193   25963 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:10.726197   25963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:10.726382   25963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:10.726639   25963 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:10.726930   25963 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:10.726947   25963 addons.go:622] checking whether the cluster is paused
	I1229 06:48:10.727031   25963 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:10.727043   25963 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:10.727385   25963 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:10.746618   25963 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:10.746666   25963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:10.765617   25963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:10.863171   25963 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:10.863244   25963 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:10.892765   25963 cri.go:96] found id: "14db244c6bfa5c219e63921179c4967da60c14cf78830c545174b0743ce83be6"
	I1229 06:48:10.892793   25963 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:10.892797   25963 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:10.892800   25963 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:10.892803   25963 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:10.892807   25963 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:10.892810   25963 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:10.892813   25963 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:10.892815   25963 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:10.892825   25963 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:10.892828   25963 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:10.892831   25963 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:10.892833   25963 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:10.892836   25963 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:10.892838   25963 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:10.892849   25963 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:10.892853   25963 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:10.892857   25963 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:10.892860   25963 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:10.892863   25963 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:10.892873   25963 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:10.892876   25963 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:10.892879   25963 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:10.892882   25963 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:10.892885   25963 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:10.892887   25963 cri.go:96] found id: ""
	I1229 06:48:10.892932   25963 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:10.907378   25963 out.go:203] 
	W1229 06:48:10.908590   25963 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:10.908612   25963 out.go:285] * 
	* 
	W1229 06:48:10.909331   25963 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:10.910448   25963 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable ingress --alsologtostderr -v=1: exit status 11 (258.476132ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 06:48:10.982152   26316 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:10.982323   26316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:10.982335   26316 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:10.982339   26316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:10.982523   26316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:10.982771   26316 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:10.983139   26316 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:10.983157   26316 addons.go:622] checking whether the cluster is paused
	I1229 06:48:10.983256   26316 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:10.983269   26316 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:10.983638   26316 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:11.006087   26316 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:11.006175   26316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:11.027563   26316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:11.124393   26316 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:11.124469   26316 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:11.153977   26316 cri.go:96] found id: "14db244c6bfa5c219e63921179c4967da60c14cf78830c545174b0743ce83be6"
	I1229 06:48:11.154022   26316 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:11.154029   26316 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:11.154035   26316 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:11.154039   26316 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:11.154043   26316 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:11.154047   26316 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:11.154050   26316 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:11.154052   26316 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:11.154064   26316 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:11.154071   26316 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:11.154075   26316 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:11.154082   26316 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:11.154085   26316 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:11.154088   26316 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:11.154101   26316 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:11.154113   26316 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:11.154117   26316 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:11.154120   26316 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:11.154123   26316 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:11.154130   26316 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:11.154136   26316 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:11.154139   26316 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:11.154144   26316 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:11.154147   26316 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:11.154150   26316 cri.go:96] found id: ""
	I1229 06:48:11.154198   26316 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:11.167735   26316 out.go:203] 
	W1229 06:48:11.168864   26316 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:11.168886   26316 out.go:285] * 
	* 
	W1229 06:48:11.169602   26316 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:11.170828   26316 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (10.68s)

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-xgnkq" [aa7b8600-5986-44ac-88ea-e848f1db2159] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003558648s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.311018ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 06:48:05.366928   25632 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:05.367184   25632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:05.367193   25632 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:05.367197   25632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:05.367393   25632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:05.367660   25632 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:05.367954   25632 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:05.367977   25632 addons.go:622] checking whether the cluster is paused
	I1229 06:48:05.368061   25632 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:05.368079   25632 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:05.368420   25632 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:05.386048   25632 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:05.386094   25632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:05.403842   25632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:05.501108   25632 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:05.501249   25632 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:05.532569   25632 cri.go:96] found id: "14db244c6bfa5c219e63921179c4967da60c14cf78830c545174b0743ce83be6"
	I1229 06:48:05.532617   25632 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:05.532625   25632 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:05.532630   25632 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:05.532635   25632 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:05.532641   25632 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:05.532646   25632 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:05.532649   25632 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:05.532652   25632 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:05.532662   25632 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:05.532669   25632 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:05.532672   25632 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:05.532674   25632 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:05.532677   25632 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:05.532680   25632 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:05.532687   25632 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:05.532690   25632 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:05.532694   25632 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:05.532697   25632 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:05.532702   25632 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:05.532707   25632 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:05.532712   25632 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:05.532715   25632 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:05.532718   25632 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:05.532726   25632 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:05.532730   25632 cri.go:96] found id: ""
	I1229 06:48:05.532785   25632 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:05.547515   25632 out.go:203] 
	W1229 06:48:05.548492   25632 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:05.548520   25632 out.go:285] * 
	* 
	W1229 06:48:05.549518   25632 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:05.550595   25632 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.798336ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-l88w7" [b73304b2-0c01-492a-92ae-84e7287f9acc] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006332619s
addons_test.go:465: (dbg) Run:  kubectl --context addons-264018 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (233.826806ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:48:02.567589   24986 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:02.567708   24986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:02.567721   24986 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:02.567728   24986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:02.567937   24986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:02.568244   24986 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:02.568606   24986 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:02.568628   24986 addons.go:622] checking whether the cluster is paused
	I1229 06:48:02.568731   24986 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:02.568744   24986 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:02.569108   24986 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:02.587127   24986 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:02.587183   24986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:02.604087   24986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:02.699449   24986 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:02.699547   24986 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:02.727937   24986 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:02.727983   24986 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:02.727988   24986 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:02.727992   24986 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:02.727995   24986 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:02.727999   24986 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:02.728002   24986 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:02.728004   24986 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:02.728007   24986 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:02.728017   24986 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:02.728020   24986 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:02.728023   24986 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:02.728025   24986 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:02.728028   24986 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:02.728031   24986 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:02.728042   24986 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:02.728046   24986 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:02.728050   24986 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:02.728053   24986 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:02.728056   24986 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:02.728061   24986 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:02.728064   24986 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:02.728067   24986 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:02.728070   24986 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:02.728073   24986 cri.go:96] found id: ""
	I1229 06:48:02.728127   24986 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:02.742028   24986 out.go:203] 
	W1229 06:48:02.743121   24986 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:02.743149   24986 out.go:285] * 
	* 
	W1229 06:48:02.743886   24986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:02.745044   24986 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)

                                                
                                    
TestAddons/parallel/CSI (47.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1229 06:47:54.747998   12733 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1229 06:47:54.751162   12733 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1229 06:47:54.751193   12733 kapi.go:107] duration metric: took 3.212791ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.223464ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-264018 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-264018 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [16ca2034-2d5c-4189-a593-b8c4f2854579] Pending
helpers_test.go:353: "task-pv-pod" [16ca2034-2d5c-4189-a593-b8c4f2854579] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [16ca2034-2d5c-4189-a593-b8c4f2854579] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002667537s
addons_test.go:574: (dbg) Run:  kubectl --context addons-264018 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-264018 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-264018 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-264018 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-264018 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-264018 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-264018 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [be8dd71d-280d-4824-8820-0a152381a424] Pending
helpers_test.go:353: "task-pv-pod-restore" [be8dd71d-280d-4824-8820-0a152381a424] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003420864s
addons_test.go:616: (dbg) Run:  kubectl --context addons-264018 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-264018 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-264018 delete volumesnapshot new-snapshot-demo
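The functional CSI steps above all completed: the hpvc claim bound, task-pv-pod mounted it, the snapshot became usable, and the restored claim and pod came up; only the trailing addon-disable calls below fail, with the same /run/runc error as the other tests. The repeated helpers_test.go:403 lines are the helper polling the claim phase until it reports Bound, roughly as follows (expected output shown for illustration):

	$ kubectl --context addons-264018 get pvc hpvc -o jsonpath='{.status.phase}' -n default
	Bound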
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (238.672322ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:48:41.591244   27116 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:41.591357   27116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:41.591365   27116 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:41.591369   27116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:41.591527   27116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:41.591758   27116 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:41.592058   27116 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:41.592075   27116 addons.go:622] checking whether the cluster is paused
	I1229 06:48:41.592153   27116 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:41.592166   27116 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:41.592508   27116 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:41.609866   27116 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:41.609926   27116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:41.626487   27116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:41.722931   27116 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:41.723038   27116 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:41.753164   27116 cri.go:96] found id: "14db244c6bfa5c219e63921179c4967da60c14cf78830c545174b0743ce83be6"
	I1229 06:48:41.753185   27116 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:41.753189   27116 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:41.753192   27116 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:41.753195   27116 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:41.753198   27116 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:41.753201   27116 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:41.753204   27116 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:41.753207   27116 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:41.753213   27116 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:41.753242   27116 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:41.753247   27116 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:41.753251   27116 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:41.753255   27116 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:41.753261   27116 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:41.753268   27116 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:41.753273   27116 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:41.753279   27116 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:41.753289   27116 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:41.753292   27116 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:41.753298   27116 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:41.753304   27116 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:41.753307   27116 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:41.753309   27116 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:41.753312   27116 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:41.753315   27116 cri.go:96] found id: ""
	I1229 06:48:41.753361   27116 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:41.769395   27116 out.go:203] 
	W1229 06:48:41.770641   27116 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:41.770659   27116 out.go:285] * 
	* 
	W1229 06:48:41.771418   27116 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:41.772621   27116 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (250.596474ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:48:41.837196   27184 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:41.837354   27184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:41.837365   27184 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:41.837369   27184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:41.837574   27184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:41.837809   27184 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:41.838112   27184 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:41.838129   27184 addons.go:622] checking whether the cluster is paused
	I1229 06:48:41.838210   27184 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:41.838244   27184 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:41.838606   27184 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:41.857658   27184 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:41.857728   27184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:41.874899   27184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:41.975656   27184 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:41.975739   27184 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:42.006033   27184 cri.go:96] found id: "14db244c6bfa5c219e63921179c4967da60c14cf78830c545174b0743ce83be6"
	I1229 06:48:42.006058   27184 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:42.006063   27184 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:42.006067   27184 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:42.006070   27184 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:42.006074   27184 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:42.006077   27184 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:42.006082   27184 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:42.006085   27184 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:42.006095   27184 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:42.006099   27184 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:42.006102   27184 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:42.006105   27184 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:42.006108   27184 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:42.006111   27184 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:42.006125   27184 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:42.006129   27184 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:42.006133   27184 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:42.006135   27184 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:42.006138   27184 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:42.006144   27184 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:42.006150   27184 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:42.006153   27184 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:42.006156   27184 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:42.006159   27184 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:42.006164   27184 cri.go:96] found id: ""
	I1229 06:48:42.006205   27184 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:42.019910   27184 out.go:203] 
	W1229 06:48:42.021272   27184 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:42.021294   27184 out.go:285] * 
	* 
	W1229 06:48:42.022055   27184 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:42.023409   27184 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.28s)

                                                
                                    
TestAddons/parallel/Headlamp (2.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-264018 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-264018 --alsologtostderr -v=1: exit status 11 (256.221341ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:47:52.244042   22713 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:52.244370   22713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:52.244381   22713 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:52.244386   22713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:52.244592   22713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:52.244869   22713 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:52.245240   22713 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:52.245257   22713 addons.go:622] checking whether the cluster is paused
	I1229 06:47:52.245342   22713 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:52.245352   22713 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:52.245742   22713 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:52.265548   22713 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:52.265611   22713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:52.282969   22713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:52.382937   22713 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:52.383017   22713 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:52.413549   22713 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:52.413568   22713 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:52.413572   22713 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:52.413575   22713 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:52.413578   22713 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:52.413581   22713 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:52.413584   22713 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:52.413586   22713 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:52.413589   22713 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:52.413594   22713 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:52.413603   22713 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:52.413606   22713 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:52.413609   22713 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:52.413625   22713 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:52.413633   22713 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:52.413653   22713 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:52.413661   22713 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:52.413668   22713 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:52.413671   22713 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:52.413674   22713 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:52.413677   22713 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:52.413679   22713 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:52.413682   22713 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:52.413685   22713 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:52.413688   22713 cri.go:96] found id: ""
	I1229 06:47:52.413724   22713 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:52.431083   22713 out.go:203] 
	W1229 06:47:52.432342   22713 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:52.432372   22713 out.go:285] * 
	* 
	W1229 06:47:52.433186   22713 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:52.434615   22713 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-264018 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-264018
helpers_test.go:244: (dbg) docker inspect addons-264018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541",
	        "Created": "2025-12-29T06:46:28.65356978Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T06:46:28.685939191Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541/hostname",
	        "HostsPath": "/var/lib/docker/containers/34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541/hosts",
	        "LogPath": "/var/lib/docker/containers/34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541/34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541-json.log",
	        "Name": "/addons-264018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-264018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-264018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "34b216e11419ad50a620e733b599c3f8a62a982701c3207bab796b0d28b86541",
	                "LowerDir": "/var/lib/docker/overlay2/314cf7bbdaa7f4b827a9a5ee021fc15df525af69ff85890a55f6b5471b8568fe-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/314cf7bbdaa7f4b827a9a5ee021fc15df525af69ff85890a55f6b5471b8568fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/314cf7bbdaa7f4b827a9a5ee021fc15df525af69ff85890a55f6b5471b8568fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/314cf7bbdaa7f4b827a9a5ee021fc15df525af69ff85890a55f6b5471b8568fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-264018",
	                "Source": "/var/lib/docker/volumes/addons-264018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-264018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-264018",
	                "name.minikube.sigs.k8s.io": "addons-264018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dcfe839990c215fda8beffae568cd7a08f04301560262f66eca53380b8aa5ba2",
	            "SandboxKey": "/var/run/docker/netns/dcfe839990c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-264018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cdfe5025fee564625b9b9435006b021b5d30cafee3b69d640c52922fbbe1e08a",
	                    "EndpointID": "ed92501be341c05f60421f6bd85fae30f132e9a655d26ce6d4a2851fd8d461cb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "52:40:27:d7:d2:59",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-264018",
	                        "34b216e11419"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
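The NetworkSettings.Ports block above is where the failing commands get their SSH endpoint: the sshutil lines in each stderr connect to 127.0.0.1:32768, the published 22/tcp port of the addons-264018 container. The lookup performed in the logs, written out as a standalone sketch:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-264018
	32768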
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-264018 -n addons-264018
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-264018 logs -n 25: (1.142837558s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-887932 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-887932 │ jenkins │ v1.37.0 │ 29 Dec 25 06:45 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete │ -p download-only-887932 │ download-only-887932 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start │ -o=json --download-only -p download-only-722440 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-722440 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete │ -p download-only-722440 │ download-only-722440 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete │ -p download-only-887932 │ download-only-887932 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete │ -p download-only-722440 │ download-only-722440 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start │ --download-only -p download-docker-517975 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-517975 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ │
	│ delete │ -p download-docker-517975 │ download-docker-517975 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start │ --download-only -p binary-mirror-518856 --alsologtostderr --binary-mirror http://127.0.0.1:40905 --driver=docker  --container-runtime=crio │ binary-mirror-518856 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ │
	│ delete │ -p binary-mirror-518856 │ binary-mirror-518856 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ addons │ disable dashboard -p addons-264018 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ │
	│ addons │ enable dashboard -p addons-264018 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ │
	│ start │ -p addons-264018 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:47 UTC │
	│ addons │ addons-264018 addons disable volcano --alsologtostderr -v=1 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:47 UTC │ │
	│ addons │ addons-264018 addons disable gcp-auth --alsologtostderr -v=1 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:47 UTC │ │
	│ addons │ addons-264018 addons disable amd-gpu-device-plugin --alsologtostderr -v=1 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:47 UTC │ │
	│ addons │ addons-264018 addons disable yakd --alsologtostderr -v=1 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:47 UTC │ │
	│ addons │ addons-264018 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:47 UTC │ │
	│ addons │ enable headlamp -p addons-264018 --alsologtostderr -v=1 │ addons-264018 │ jenkins │ v1.37.0 │ 29 Dec 25 06:47 UTC │ │
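For reference, the Audit table above is part of what the test helper collects with "out/minikube-linux-amd64 -p addons-264018 logs -n 25" (see the helpers_test.go line at the top of this dump). A minimal sketch for reproducing the same dump on a workstation, assuming a local profile also named addons-264018:

    # print the last 25 log lines plus the audit / last-start sections for the profile
    minikube -p addons-264018 logs -n 25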
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:05.027286   14074 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:05.027511   14074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:05.027518   14074 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:05.027523   14074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:05.027694   14074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:46:05.028208   14074 out.go:368] Setting JSON to false
	I1229 06:46:05.029019   14074 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1717,"bootTime":1766989048,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:46:05.029066   14074 start.go:143] virtualization: kvm guest
	I1229 06:46:05.031100   14074 out.go:179] * [addons-264018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:46:05.032517   14074 notify.go:221] Checking for updates...
	I1229 06:46:05.032543   14074 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:46:05.033947   14074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:05.035047   14074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 06:46:05.036148   14074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 06:46:05.040672   14074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:46:05.041787   14074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:46:05.043075   14074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:05.064979   14074 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 06:46:05.065066   14074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:05.118817   14074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-29 06:46:05.109960745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:46:05.118909   14074 docker.go:319] overlay module found
	I1229 06:46:05.120566   14074 out.go:179] * Using the docker driver based on user configuration
	I1229 06:46:05.121749   14074 start.go:309] selected driver: docker
	I1229 06:46:05.121760   14074 start.go:928] validating driver "docker" against <nil>
	I1229 06:46:05.121771   14074 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:46:05.122292   14074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:05.175416   14074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-29 06:46:05.166835729 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:46:05.175570   14074 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:05.175816   14074 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 06:46:05.177407   14074 out.go:179] * Using Docker driver with root privileges
	I1229 06:46:05.178517   14074 cni.go:84] Creating CNI manager for ""
	I1229 06:46:05.178571   14074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 06:46:05.178581   14074 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 06:46:05.178651   14074 start.go:353] cluster config:
	{Name:addons-264018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-264018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:46:05.179700   14074 out.go:179] * Starting "addons-264018" primary control-plane node in "addons-264018" cluster
	I1229 06:46:05.180732   14074 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 06:46:05.181929   14074 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 06:46:05.183025   14074 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 06:46:05.183053   14074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 06:46:05.183062   14074 cache.go:65] Caching tarball of preloaded images
	I1229 06:46:05.183119   14074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 06:46:05.183159   14074 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 06:46:05.183174   14074 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
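The preload check above only verifies that the cached tarball exists. Its contents can also be listed directly on the build host; a sketch, assuming lz4 and tar are installed and using the cache path shown in the log:

    # list the first few entries of the cri-o preload tarball without extracting it
    lz4 -dc /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 \
      | tar -tf - | head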
	I1229 06:46:05.183579   14074 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/config.json ...
	I1229 06:46:05.183605   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/config.json: {Name:mk1a64f1f4c7a4a2286ee8a22cdb5bc7340167fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:05.200040   14074 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:46:05.200157   14074 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory
	I1229 06:46:05.200177   14074 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory, skipping pull
	I1229 06:46:05.200187   14074 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in cache, skipping pull
	I1229 06:46:05.200196   14074 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 as a tarball
	I1229 06:46:05.200205   14074 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 from local cache
	I1229 06:46:17.951715   14074 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 from cached tarball
	I1229 06:46:17.951757   14074 cache.go:243] Successfully downloaded all kic artifacts
	I1229 06:46:17.951800   14074 start.go:360] acquireMachinesLock for addons-264018: {Name:mk224fa6181881b93fb8fbe089b065d9ccf5abb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 06:46:17.951899   14074 start.go:364] duration metric: took 78.391µs to acquireMachinesLock for "addons-264018"
	I1229 06:46:17.951929   14074 start.go:93] Provisioning new machine with config: &{Name:addons-264018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-264018 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 06:46:17.952007   14074 start.go:125] createHost starting for "" (driver="docker")
	I1229 06:46:17.953746   14074 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1229 06:46:17.953942   14074 start.go:159] libmachine.API.Create for "addons-264018" (driver="docker")
	I1229 06:46:17.953971   14074 client.go:173] LocalClient.Create starting
	I1229 06:46:17.954081   14074 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 06:46:18.122447   14074 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 06:46:18.192675   14074 cli_runner.go:164] Run: docker network inspect addons-264018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 06:46:18.210969   14074 cli_runner.go:211] docker network inspect addons-264018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 06:46:18.211036   14074 network_create.go:284] running [docker network inspect addons-264018] to gather additional debugging logs...
	I1229 06:46:18.211054   14074 cli_runner.go:164] Run: docker network inspect addons-264018
	W1229 06:46:18.226666   14074 cli_runner.go:211] docker network inspect addons-264018 returned with exit code 1
	I1229 06:46:18.226694   14074 network_create.go:287] error running [docker network inspect addons-264018]: docker network inspect addons-264018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-264018 not found
	I1229 06:46:18.226706   14074 network_create.go:289] output of [docker network inspect addons-264018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-264018 not found
	
	** /stderr **
	I1229 06:46:18.226809   14074 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 06:46:18.242721   14074 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fbf0b0}
	I1229 06:46:18.242766   14074 network_create.go:124] attempt to create docker network addons-264018 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1229 06:46:18.242813   14074 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-264018 addons-264018
	I1229 06:46:18.288297   14074 network_create.go:108] docker network addons-264018 192.168.49.0/24 created
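The network_create step above builds a dedicated bridge network (192.168.49.0/24, gateway 192.168.49.1, MTU 1500) for the node container. A quick way to confirm the subnet and gateway after the fact, as a plain docker CLI sketch:

    # show the IPAM subnet and gateway of the minikube-created network
    docker network inspect addons-264018 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'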
	I1229 06:46:18.288324   14074 kic.go:121] calculated static IP "192.168.49.2" for the "addons-264018" container
	I1229 06:46:18.288396   14074 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 06:46:18.304571   14074 cli_runner.go:164] Run: docker volume create addons-264018 --label name.minikube.sigs.k8s.io=addons-264018 --label created_by.minikube.sigs.k8s.io=true
	I1229 06:46:18.321369   14074 oci.go:103] Successfully created a docker volume addons-264018
	I1229 06:46:18.321462   14074 cli_runner.go:164] Run: docker run --rm --name addons-264018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-264018 --entrypoint /usr/bin/test -v addons-264018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 06:46:24.895976   14074 cli_runner.go:217] Completed: docker run --rm --name addons-264018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-264018 --entrypoint /usr/bin/test -v addons-264018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib: (6.574462818s)
	I1229 06:46:24.896006   14074 oci.go:107] Successfully prepared a docker volume addons-264018
	I1229 06:46:24.896067   14074 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 06:46:24.896085   14074 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 06:46:24.896133   14074 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-264018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 06:46:28.582407   14074 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-264018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.686219904s)
	I1229 06:46:28.582441   14074 kic.go:203] duration metric: took 3.686353208s to extract preloaded images to volume ...
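After the preload volume is populated, the images extracted here are what cri-o serves inside the node; the log later confirms this with "sudo crictl images --output json". A sketch for checking it by hand once the node container is up:

    # list images known to cri-o inside the node (same check the test performs later)
    minikube -p addons-264018 ssh -- sudo crictl images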
	W1229 06:46:28.582545   14074 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 06:46:28.582587   14074 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 06:46:28.582642   14074 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 06:46:28.638011   14074 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-264018 --name addons-264018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-264018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-264018 --network addons-264018 --ip 192.168.49.2 --volume addons-264018:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 06:46:28.918512   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Running}}
	I1229 06:46:28.937314   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:28.955254   14074 cli_runner.go:164] Run: docker exec addons-264018 stat /var/lib/dpkg/alternatives/iptables
	I1229 06:46:29.002472   14074 oci.go:144] the created container "addons-264018" has a running status.
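The node container is started with --publish=127.0.0.1:: mappings, so SSH (22), the API server (8443) and the other service ports land on random localhost ports; the 127.0.0.1:32768 SSH endpoint used below comes from exactly such a mapping. A sketch for listing the actual bindings:

    # show which localhost ports were assigned to the container's published ports
    docker port addons-264018
    # or a single port, e.g. SSH:
    docker port addons-264018 22/tcp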
	I1229 06:46:29.002502   14074 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa...
	I1229 06:46:29.026358   14074 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 06:46:29.053681   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:29.074397   14074 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 06:46:29.074428   14074 kic_runner.go:114] Args: [docker exec --privileged addons-264018 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 06:46:29.119740   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:29.139831   14074 machine.go:94] provisionDockerMachine start ...
	I1229 06:46:29.139995   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:29.160292   14074 main.go:144] libmachine: Using SSH client type: native
	I1229 06:46:29.160554   14074 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1229 06:46:29.160569   14074 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 06:46:29.162039   14074 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50308->127.0.0.1:32768: read: connection reset by peer
	I1229 06:46:32.297072   14074 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-264018
	
	I1229 06:46:32.297097   14074 ubuntu.go:182] provisioning hostname "addons-264018"
	I1229 06:46:32.297151   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:32.314181   14074 main.go:144] libmachine: Using SSH client type: native
	I1229 06:46:32.314493   14074 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1229 06:46:32.314510   14074 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-264018 && echo "addons-264018" | sudo tee /etc/hostname
	I1229 06:46:32.457316   14074 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-264018
	
	I1229 06:46:32.457407   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:32.477127   14074 main.go:144] libmachine: Using SSH client type: native
	I1229 06:46:32.477362   14074 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1229 06:46:32.477385   14074 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-264018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-264018/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-264018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 06:46:32.610443   14074 main.go:144] libmachine: SSH cmd err, output: <nil>: 
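The SSH script above makes sure 127.0.1.1 maps to the node hostname inside the container. If that provisioning step ever needs to be double-checked, a minimal sketch (assuming the profile is up):

    # confirm the node's /etc/hosts carries the addons-264018 entry
    minikube -p addons-264018 ssh -- grep addons-264018 /etc/hosts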
	I1229 06:46:32.610470   14074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 06:46:32.610496   14074 ubuntu.go:190] setting up certificates
	I1229 06:46:32.610516   14074 provision.go:84] configureAuth start
	I1229 06:46:32.610577   14074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-264018
	I1229 06:46:32.627061   14074 provision.go:143] copyHostCerts
	I1229 06:46:32.627128   14074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 06:46:32.627283   14074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 06:46:32.627382   14074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 06:46:32.627449   14074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.addons-264018 san=[127.0.0.1 192.168.49.2 addons-264018 localhost minikube]
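The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.49.2, addons-264018, localhost, minikube). A sketch for inspecting them, assuming openssl is available on the host and using the server.pem path from the log:

    # print the Subject Alternative Names of the generated server certificate
    openssl x509 -in /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'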
	I1229 06:46:32.753600   14074 provision.go:177] copyRemoteCerts
	I1229 06:46:32.753664   14074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 06:46:32.753698   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:32.771149   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:32.867248   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 06:46:32.885924   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1229 06:46:32.902423   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 06:46:32.919977   14074 provision.go:87] duration metric: took 309.435546ms to configureAuth
	I1229 06:46:32.920005   14074 ubuntu.go:206] setting minikube options for container-runtime
	I1229 06:46:32.920165   14074 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:46:32.920286   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:32.937294   14074 main.go:144] libmachine: Using SSH client type: native
	I1229 06:46:32.937485   14074 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1229 06:46:32.937499   14074 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 06:46:33.210788   14074 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 06:46:33.210811   14074 machine.go:97] duration metric: took 4.070885133s to provisionDockerMachine
	I1229 06:46:33.210822   14074 client.go:176] duration metric: took 15.256841318s to LocalClient.Create
	I1229 06:46:33.210837   14074 start.go:167] duration metric: took 15.256900372s to libmachine.API.Create "addons-264018"
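The container-runtime option written a few lines above lands in /etc/sysconfig/crio.minikube, which the kicbase crio unit presumably reads as an environment file so that --insecure-registry 10.96.0.0/12 ends up on cri-o's command line. A sketch for verifying the drop-in on a running profile:

    # show the sysconfig drop-in minikube just wrote for cri-o
    minikube -p addons-264018 ssh -- sudo cat /etc/sysconfig/crio.minikube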
	I1229 06:46:33.210845   14074 start.go:293] postStartSetup for "addons-264018" (driver="docker")
	I1229 06:46:33.210854   14074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 06:46:33.210904   14074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 06:46:33.210936   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:33.228366   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:33.326976   14074 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 06:46:33.330381   14074 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 06:46:33.330407   14074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 06:46:33.330418   14074 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 06:46:33.330469   14074 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 06:46:33.330490   14074 start.go:296] duration metric: took 119.639803ms for postStartSetup
	I1229 06:46:33.330760   14074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-264018
	I1229 06:46:33.347963   14074 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/config.json ...
	I1229 06:46:33.348273   14074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:46:33.348318   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:33.364426   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:33.457191   14074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 06:46:33.461541   14074 start.go:128] duration metric: took 15.509522091s to createHost
	I1229 06:46:33.461566   14074 start.go:83] releasing machines lock for "addons-264018", held for 15.509651662s
	I1229 06:46:33.461629   14074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-264018
	I1229 06:46:33.478541   14074 ssh_runner.go:195] Run: cat /version.json
	I1229 06:46:33.478590   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:33.478636   14074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 06:46:33.478707   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:33.496250   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:33.496959   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:33.642536   14074 ssh_runner.go:195] Run: systemctl --version
	I1229 06:46:33.648630   14074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 06:46:33.680947   14074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 06:46:33.685348   14074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 06:46:33.685403   14074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 06:46:33.709783   14074 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 06:46:33.709809   14074 start.go:496] detecting cgroup driver to use...
	I1229 06:46:33.709843   14074 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 06:46:33.709892   14074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 06:46:33.724902   14074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 06:46:33.736661   14074 docker.go:218] disabling cri-docker service (if available) ...
	I1229 06:46:33.736706   14074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 06:46:33.751929   14074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 06:46:33.767948   14074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 06:46:33.845889   14074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 06:46:33.933117   14074 docker.go:234] disabling docker service ...
	I1229 06:46:33.933200   14074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 06:46:33.950285   14074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 06:46:33.961907   14074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 06:46:34.041499   14074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 06:46:34.120010   14074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 06:46:34.132122   14074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 06:46:34.145235   14074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 06:46:34.145295   14074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 06:46:34.154719   14074 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 06:46:34.154775   14074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 06:46:34.162736   14074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 06:46:34.170595   14074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 06:46:34.178598   14074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 06:46:34.185818   14074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 06:46:34.193931   14074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 06:46:34.206563   14074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
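Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, the conmon cgroup and the unprivileged-port sysctl. A sketch for checking the resulting keys on the node:

    # grep the drop-in for the keys minikube just rewrote; expected values:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0"   (inside default_sysctls)
    minikube -p addons-264018 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf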
	I1229 06:46:34.214551   14074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 06:46:34.221165   14074 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1229 06:46:34.221208   14074 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1229 06:46:34.232369   14074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 06:46:34.239998   14074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:46:34.318680   14074 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 06:46:34.448817   14074 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 06:46:34.448918   14074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 06:46:34.452838   14074 start.go:574] Will wait 60s for crictl version
	I1229 06:46:34.452894   14074 ssh_runner.go:195] Run: which crictl
	I1229 06:46:34.456350   14074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 06:46:34.480330   14074 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
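The version probe above works because crictl was pointed at cri-o's socket via /etc/crictl.yaml a few steps earlier, so no explicit --runtime-endpoint is needed from here on. A quick sanity-check sketch:

    # confirm crictl can reach cri-o through the configured endpoint
    minikube -p addons-264018 ssh -- sudo crictl info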
	I1229 06:46:34.480450   14074 ssh_runner.go:195] Run: crio --version
	I1229 06:46:34.506421   14074 ssh_runner.go:195] Run: crio --version
	I1229 06:46:34.533165   14074 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 06:46:34.534379   14074 cli_runner.go:164] Run: docker network inspect addons-264018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 06:46:34.551275   14074 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1229 06:46:34.555426   14074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
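The two commands above inject the host gateway address under the name host.minikube.internal into the node's /etc/hosts, so workloads on the node can reach services running on the host machine. A verification sketch:

    # the entry minikube adds should resolve to the docker network gateway (192.168.49.1 here)
    minikube -p addons-264018 ssh -- grep host.minikube.internal /etc/hosts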
	I1229 06:46:34.565602   14074 kubeadm.go:884] updating cluster {Name:addons-264018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-264018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 06:46:34.565696   14074 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 06:46:34.565749   14074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 06:46:34.599726   14074 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 06:46:34.599749   14074 crio.go:433] Images already preloaded, skipping extraction
	I1229 06:46:34.599796   14074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 06:46:34.624142   14074 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 06:46:34.624164   14074 cache_images.go:86] Images are preloaded, skipping loading
	I1229 06:46:34.624171   14074 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1229 06:46:34.624265   14074 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-264018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-264018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 06:46:34.624330   14074 ssh_runner.go:195] Run: crio config
	I1229 06:46:34.667410   14074 cni.go:84] Creating CNI manager for ""
	I1229 06:46:34.667430   14074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 06:46:34.667446   14074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 06:46:34.667468   14074 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-264018 NodeName:addons-264018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 06:46:34.667576   14074 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-264018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 06:46:34.667632   14074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 06:46:34.675406   14074 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 06:46:34.675465   14074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 06:46:34.682669   14074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1229 06:46:34.694196   14074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 06:46:34.708191   14074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1229 06:46:34.720020   14074 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1229 06:46:34.723321   14074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 06:46:34.732172   14074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:46:34.807246   14074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:46:34.830570   14074 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018 for IP: 192.168.49.2
	I1229 06:46:34.830593   14074 certs.go:195] generating shared ca certs ...
	I1229 06:46:34.830614   14074 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:34.830753   14074 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 06:46:34.935877   14074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt ...
	I1229 06:46:34.935908   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt: {Name:mkdd6992f69e04ad46c022482d6ae092729e2268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:34.936074   14074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key ...
	I1229 06:46:34.936085   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key: {Name:mk1d90cc8ee70a7b925a650f1f372c86cd88c0ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:34.936153   14074 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 06:46:35.007988   14074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt ...
	I1229 06:46:35.008018   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt: {Name:mke902017ff90abeff457f8b4e6d4b2b20ab4f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.008167   14074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key ...
	I1229 06:46:35.008180   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key: {Name:mka2428343f246fb6e98bbcacdad5397b5e937e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.008280   14074 certs.go:257] generating profile certs ...
	I1229 06:46:35.008335   14074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.key
	I1229 06:46:35.008349   14074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt with IP's: []
	I1229 06:46:35.063546   14074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt ...
	I1229 06:46:35.063571   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: {Name:mkef412ba19cdb9ac0889aba9613bfaf37bc1cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.063730   14074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.key ...
	I1229 06:46:35.063741   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.key: {Name:mkdbbc986854aa19ac1b5a46230d5e1b5310108e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.063819   14074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.key.6d6ba777
	I1229 06:46:35.063837   14074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.crt.6d6ba777 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1229 06:46:35.171501   14074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.crt.6d6ba777 ...
	I1229 06:46:35.171528   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.crt.6d6ba777: {Name:mke11ff0b1cf1b294cadbfe213a54985fee3fb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.171683   14074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.key.6d6ba777 ...
	I1229 06:46:35.171695   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.key.6d6ba777: {Name:mk95ebbe909271d71aaab900780c23e86e4f25c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.171765   14074 certs.go:382] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.crt.6d6ba777 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.crt
	I1229 06:46:35.171836   14074 certs.go:386] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.key.6d6ba777 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.key
	I1229 06:46:35.171883   14074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.key
	I1229 06:46:35.171900   14074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.crt with IP's: []
	I1229 06:46:35.271297   14074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.crt ...
	I1229 06:46:35.271327   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.crt: {Name:mkf21d8249bc1883b5eeca8eb2b39b678a4100c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.271487   14074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.key ...
	I1229 06:46:35.271499   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.key: {Name:mk71e89460c50c49417b1a17e4ed554ef460682f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:35.271664   14074 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 06:46:35.271699   14074 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 06:46:35.271725   14074 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 06:46:35.271749   14074 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 06:46:35.272325   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 06:46:35.290318   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 06:46:35.307488   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 06:46:35.324328   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 06:46:35.340810   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1229 06:46:35.357449   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 06:46:35.374816   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 06:46:35.391550   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 06:46:35.407781   14074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 06:46:35.425764   14074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 06:46:35.437638   14074 ssh_runner.go:195] Run: openssl version
	I1229 06:46:35.443336   14074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:46:35.450241   14074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 06:46:35.459464   14074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:46:35.462894   14074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:46:35.462937   14074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 06:46:35.496295   14074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 06:46:35.504058   14074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 06:46:35.511460   14074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 06:46:35.514690   14074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 06:46:35.514737   14074 kubeadm.go:401] StartCluster: {Name:addons-264018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-264018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:46:35.514805   14074 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:46:35.514845   14074 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:46:35.540038   14074 cri.go:96] found id: ""
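Note: the empty `found id: ""` result is expected on a first start; there are no kube-system containers yet, so the stale-config cleanup below finds nothing to handle. A small Go sketch of the same crictl query via os/exec (flags copied from the log line above; the helper itself is illustrative, assuming crictl and sudo are available):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs crictl reports for the
// kube-system namespace, running or not, matching the query in the log.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; empty on a fresh node
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```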
	I1229 06:46:35.540109   14074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 06:46:35.547930   14074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 06:46:35.555200   14074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 06:46:35.555358   14074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 06:46:35.562504   14074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 06:46:35.562523   14074 kubeadm.go:158] found existing configuration files:
	
	I1229 06:46:35.562562   14074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 06:46:35.569484   14074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 06:46:35.569537   14074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 06:46:35.576056   14074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 06:46:35.582938   14074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 06:46:35.582980   14074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 06:46:35.589676   14074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 06:46:35.597368   14074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 06:46:35.597422   14074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 06:46:35.605683   14074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 06:46:35.612983   14074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 06:46:35.613029   14074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 06:46:35.619894   14074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 06:46:35.653275   14074 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 06:46:35.653332   14074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 06:46:35.726842   14074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 06:46:35.726988   14074 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1229 06:46:35.727051   14074 kubeadm.go:319] OS: Linux
	I1229 06:46:35.727124   14074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 06:46:35.727241   14074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 06:46:35.727338   14074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 06:46:35.727410   14074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 06:46:35.727465   14074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 06:46:35.727533   14074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 06:46:35.727582   14074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 06:46:35.727626   14074 kubeadm.go:319] CGROUPS_IO: enabled
	I1229 06:46:35.781656   14074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 06:46:35.781761   14074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 06:46:35.781862   14074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 06:46:35.789679   14074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 06:46:35.792391   14074 out.go:252]   - Generating certificates and keys ...
	I1229 06:46:35.792487   14074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 06:46:35.792623   14074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 06:46:35.914597   14074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 06:46:35.957501   14074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 06:46:36.004002   14074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 06:46:36.083676   14074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 06:46:36.244625   14074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 06:46:36.244785   14074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-264018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1229 06:46:36.270296   14074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 06:46:36.270429   14074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-264018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1229 06:46:36.344966   14074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 06:46:36.381151   14074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 06:46:36.469541   14074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 06:46:36.469605   14074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 06:46:36.543612   14074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 06:46:36.640338   14074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 06:46:36.741020   14074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 06:46:36.801746   14074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 06:46:36.910049   14074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 06:46:36.910474   14074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 06:46:36.913957   14074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 06:46:36.915409   14074 out.go:252]   - Booting up control plane ...
	I1229 06:46:36.915494   14074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 06:46:36.915590   14074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 06:46:36.917129   14074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 06:46:36.930399   14074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 06:46:36.930522   14074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 06:46:36.936470   14074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 06:46:36.936726   14074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 06:46:36.936782   14074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 06:46:37.029840   14074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 06:46:37.030017   14074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 06:46:37.531380   14074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.723199ms
	I1229 06:46:37.534320   14074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 06:46:37.534437   14074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1229 06:46:37.534557   14074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 06:46:37.534645   14074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 06:46:38.040563   14074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 506.100504ms
	I1229 06:46:39.412448   14074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.877971676s
	I1229 06:46:41.035543   14074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501175017s
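Note: each of the kubelet-check and control-plane-check lines above is an HTTP(S) probe retried until it returns 200 OK within the 4m0s budget. A minimal Go sketch of such a probe loop (endpoint URLs taken from the log; the helper name and the InsecureSkipVerify shortcut are illustrative, not kubeadm's exact code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Local self-signed endpoints; skipping verification keeps the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The apiserver is probed the same way at https://192.168.49.2:8443/livez.
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, "->", waitHealthy(u, 10*time.Second))
	}
}
```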
	I1229 06:46:41.050009   14074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 06:46:41.058969   14074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 06:46:41.067045   14074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 06:46:41.067372   14074 kubeadm.go:319] [mark-control-plane] Marking the node addons-264018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 06:46:41.074878   14074 kubeadm.go:319] [bootstrap-token] Using token: 698a3d.4viwxngnx74hq03g
	I1229 06:46:41.076028   14074 out.go:252]   - Configuring RBAC rules ...
	I1229 06:46:41.076174   14074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 06:46:41.078979   14074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 06:46:41.083204   14074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 06:46:41.085309   14074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 06:46:41.088459   14074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 06:46:41.090632   14074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 06:46:41.440135   14074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 06:46:41.858035   14074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 06:46:42.440637   14074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 06:46:42.441480   14074 kubeadm.go:319] 
	I1229 06:46:42.441542   14074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 06:46:42.441577   14074 kubeadm.go:319] 
	I1229 06:46:42.441710   14074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 06:46:42.441725   14074 kubeadm.go:319] 
	I1229 06:46:42.441763   14074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 06:46:42.441848   14074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 06:46:42.441923   14074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 06:46:42.441937   14074 kubeadm.go:319] 
	I1229 06:46:42.442010   14074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 06:46:42.442020   14074 kubeadm.go:319] 
	I1229 06:46:42.442084   14074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 06:46:42.442097   14074 kubeadm.go:319] 
	I1229 06:46:42.442175   14074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 06:46:42.442308   14074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 06:46:42.442406   14074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 06:46:42.442416   14074 kubeadm.go:319] 
	I1229 06:46:42.442558   14074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 06:46:42.442701   14074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 06:46:42.442716   14074 kubeadm.go:319] 
	I1229 06:46:42.442795   14074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 698a3d.4viwxngnx74hq03g \
	I1229 06:46:42.442896   14074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 06:46:42.442920   14074 kubeadm.go:319] 	--control-plane 
	I1229 06:46:42.442924   14074 kubeadm.go:319] 
	I1229 06:46:42.442999   14074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 06:46:42.443006   14074 kubeadm.go:319] 
	I1229 06:46:42.443078   14074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 698a3d.4viwxngnx74hq03g \
	I1229 06:46:42.443247   14074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
	I1229 06:46:42.444700   14074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 06:46:42.444840   14074 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
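Note: the --discovery-token-ca-cert-hash in the join command above pins the cluster CA; kubeadm's pin format is a SHA-256 over the CA certificate's DER-encoded Subject Public Key Info. The Go sketch below is an illustrative reconstruction of that computation using the CA path from this log:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the kubeadm-style public key pin for a PEM-encoded CA
// certificate: sha256 over the DER SubjectPublicKeyInfo, hex encoded.
func caCertHash(pemPath string) (string, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(hash) // should match the --discovery-token-ca-cert-hash in the join command
}
```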
	I1229 06:46:42.444866   14074 cni.go:84] Creating CNI manager for ""
	I1229 06:46:42.444879   14074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 06:46:42.447156   14074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 06:46:42.448440   14074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 06:46:42.452330   14074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 06:46:42.452343   14074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 06:46:42.464629   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 06:46:42.685828   14074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 06:46:42.685926   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:42.685941   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-264018 minikube.k8s.io/updated_at=2025_12_29T06_46_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=addons-264018 minikube.k8s.io/primary=true
	I1229 06:46:42.756578   14074 ops.go:34] apiserver oom_adj: -16
	I1229 06:46:42.756583   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:43.257587   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:43.757638   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:44.257277   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:44.756812   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:45.257619   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:45.757317   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:46.256889   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:46.756658   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:47.256987   14074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 06:46:47.317831   14074 kubeadm.go:1114] duration metric: took 4.631971963s to wait for elevateKubeSystemPrivileges
	I1229 06:46:47.317887   14074 kubeadm.go:403] duration metric: took 11.803134936s to StartCluster
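Note: the burst of identical `kubectl get sa default` calls above is a simple poll. The default service account is created asynchronously by the controller manager, and the start flow retries roughly every 500ms until it exists before binding cluster-admin to kube-system:default. A hedged Go sketch of such a wait loop (binary and kubeconfig paths copied from the log; the helper name is made up):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until it
// succeeds, mirroring the ~500ms polling visible in the log above.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the controller manager has created the default SA
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}
```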
	I1229 06:46:47.317916   14074 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:47.318029   14074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 06:46:47.318378   14074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:47.318575   14074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 06:46:47.318632   14074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 06:46:47.318655   14074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1229 06:46:47.318770   14074 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-264018"
	I1229 06:46:47.318778   14074 addons.go:70] Setting yakd=true in profile "addons-264018"
	I1229 06:46:47.318798   14074 addons.go:239] Setting addon yakd=true in "addons-264018"
	I1229 06:46:47.318794   14074 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-264018"
	I1229 06:46:47.318818   14074 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-264018"
	I1229 06:46:47.318829   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.318829   14074 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:46:47.318837   14074 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-264018"
	I1229 06:46:47.318822   14074 addons.go:70] Setting cloud-spanner=true in profile "addons-264018"
	I1229 06:46:47.318844   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.318852   14074 addons.go:239] Setting addon cloud-spanner=true in "addons-264018"
	I1229 06:46:47.318876   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.318876   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.318909   14074 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-264018"
	I1229 06:46:47.318922   14074 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-264018"
	I1229 06:46:47.318943   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.318954   14074 addons.go:70] Setting storage-provisioner=true in profile "addons-264018"
	I1229 06:46:47.318989   14074 addons.go:239] Setting addon storage-provisioner=true in "addons-264018"
	I1229 06:46:47.319014   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.319060   14074 addons.go:70] Setting ingress=true in profile "addons-264018"
	I1229 06:46:47.319080   14074 addons.go:239] Setting addon ingress=true in "addons-264018"
	I1229 06:46:47.319097   14074 addons.go:70] Setting default-storageclass=true in profile "addons-264018"
	I1229 06:46:47.319112   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.319113   14074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-264018"
	I1229 06:46:47.319366   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319372   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319380   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319383   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319401   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319423   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319430   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319895   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.319973   14074 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-264018"
	I1229 06:46:47.319996   14074 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-264018"
	I1229 06:46:47.320041   14074 addons.go:70] Setting inspektor-gadget=true in profile "addons-264018"
	I1229 06:46:47.320060   14074 addons.go:239] Setting addon inspektor-gadget=true in "addons-264018"
	I1229 06:46:47.320084   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.320344   14074 addons.go:70] Setting volcano=true in profile "addons-264018"
	I1229 06:46:47.320371   14074 addons.go:239] Setting addon volcano=true in "addons-264018"
	I1229 06:46:47.320398   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.320509   14074 addons.go:70] Setting ingress-dns=true in profile "addons-264018"
	I1229 06:46:47.320534   14074 addons.go:239] Setting addon ingress-dns=true in "addons-264018"
	I1229 06:46:47.320562   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.320750   14074 addons.go:70] Setting volumesnapshots=true in profile "addons-264018"
	I1229 06:46:47.320784   14074 addons.go:239] Setting addon volumesnapshots=true in "addons-264018"
	I1229 06:46:47.320810   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.320983   14074 addons.go:70] Setting registry=true in profile "addons-264018"
	I1229 06:46:47.321005   14074 addons.go:239] Setting addon registry=true in "addons-264018"
	I1229 06:46:47.321025   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.321195   14074 addons.go:70] Setting registry-creds=true in profile "addons-264018"
	I1229 06:46:47.321212   14074 addons.go:239] Setting addon registry-creds=true in "addons-264018"
	I1229 06:46:47.321248   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.321416   14074 out.go:179] * Verifying Kubernetes components...
	I1229 06:46:47.321544   14074 addons.go:70] Setting gcp-auth=true in profile "addons-264018"
	I1229 06:46:47.321570   14074 mustload.go:66] Loading cluster: addons-264018
	I1229 06:46:47.321626   14074 addons.go:70] Setting metrics-server=true in profile "addons-264018"
	I1229 06:46:47.321650   14074 addons.go:239] Setting addon metrics-server=true in "addons-264018"
	I1229 06:46:47.321672   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.322896   14074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 06:46:47.332919   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.332963   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.333264   14074 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:46:47.333539   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.333862   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.334174   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.348188   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.348901   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.350631   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.350855   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.361382   14074 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1229 06:46:47.361510   14074 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I1229 06:46:47.365028   14074 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1229 06:46:47.365049   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1229 06:46:47.365129   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.365428   14074 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1229 06:46:47.365444   14074 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1229 06:46:47.365537   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.390402   14074 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1229 06:46:47.392399   14074 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1229 06:46:47.392434   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1229 06:46:47.392495   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.398209   14074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1229 06:46:47.401343   14074 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1229 06:46:47.401657   14074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 06:46:47.401673   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 06:46:47.401735   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.413189   14074 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1229 06:46:47.414536   14074 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1229 06:46:47.414553   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1229 06:46:47.414622   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.423552   14074 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1229 06:46:47.423769   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1229 06:46:47.424140   14074 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1229 06:46:47.425325   14074 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1229 06:46:47.429937   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1229 06:46:47.430008   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.430486   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.426156   14074 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1229 06:46:47.428509   14074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 06:46:47.433306   14074 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1229 06:46:47.433325   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1229 06:46:47.433376   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.433754   14074 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1229 06:46:47.434978   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1229 06:46:47.435974   14074 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1229 06:46:47.436847   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1229 06:46:47.437280   14074 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1229 06:46:47.437706   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1229 06:46:47.437850   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.439205   14074 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1229 06:46:47.439244   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1229 06:46:47.440308   14074 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1229 06:46:47.440393   14074 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1229 06:46:47.440523   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.440348   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1229 06:46:47.442227   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1229 06:46:47.443907   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1229 06:46:47.448769   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1229 06:46:47.449746   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1229 06:46:47.449765   14074 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1229 06:46:47.449820   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.450005   14074 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-264018"
	I1229 06:46:47.452950   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.457536   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.457955   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.459231   14074 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1229 06:46:47.461204   14074 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1229 06:46:47.461541   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1229 06:46:47.464491   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.465999   14074 addons.go:239] Setting addon default-storageclass=true in "addons-264018"
	I1229 06:46:47.466040   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:47.466533   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:47.467652   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.470181   14074 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1229 06:46:47.476407   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.476871   14074 out.go:179]   - Using image docker.io/registry:3.0.0
	I1229 06:46:47.476993   14074 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1229 06:46:47.477004   14074 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1229 06:46:47.477064   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.486191   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.487262   14074 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1229 06:46:47.489461   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.490917   14074 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1229 06:46:47.490942   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1229 06:46:47.491014   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.495405   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.517594   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.538736   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.540337   14074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 06:46:47.540355   14074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 06:46:47.540406   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.540705   14074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 06:46:47.545351   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.548365   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.548445   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.558650   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.567079   14074 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1229 06:46:47.568439   14074 out.go:179]   - Using image docker.io/busybox:stable
	I1229 06:46:47.569637   14074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1229 06:46:47.569785   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1229 06:46:47.569915   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:47.579846   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.580009   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	W1229 06:46:47.585472   14074 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1229 06:46:47.585530   14074 retry.go:84] will retry after 200ms: ssh: handshake failed: EOF
	W1229 06:46:47.585672   14074 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1229 06:46:47.610447   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:47.666702   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 06:46:47.672961   14074 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1229 06:46:47.672981   14074 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1229 06:46:47.676422   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1229 06:46:47.677373   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1229 06:46:47.678790   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1229 06:46:47.686883   14074 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1229 06:46:47.686901   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1229 06:46:47.696338   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1229 06:46:47.700949   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1229 06:46:47.701166   14074 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1229 06:46:47.701181   14074 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1229 06:46:47.713059   14074 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1229 06:46:47.713205   14074 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1229 06:46:47.713981   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1229 06:46:47.713997   14074 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1229 06:46:47.728787   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1229 06:46:47.736420   14074 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1229 06:46:47.736441   14074 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1229 06:46:47.739154   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1229 06:46:47.742332   14074 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1229 06:46:47.742403   14074 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1229 06:46:47.755501   14074 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1229 06:46:47.755523   14074 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1229 06:46:47.755723   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1229 06:46:47.755749   14074 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1229 06:46:47.763303   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1229 06:46:47.777325   14074 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1229 06:46:47.777345   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1229 06:46:47.792092   14074 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1229 06:46:47.792114   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1229 06:46:47.801131   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1229 06:46:47.821852   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1229 06:46:47.821895   14074 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1229 06:46:47.845786   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1229 06:46:47.861990   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1229 06:46:47.868620   14074 node_ready.go:35] waiting up to 6m0s for node "addons-264018" to be "Ready" ...
	I1229 06:46:47.869337   14074 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1229 06:46:47.871027   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1229 06:46:47.871085   14074 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1229 06:46:47.966969   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1229 06:46:47.967181   14074 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1229 06:46:48.016988   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 06:46:48.020273   14074 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1229 06:46:48.020355   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1229 06:46:48.089521   14074 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1229 06:46:48.089560   14074 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1229 06:46:48.123820   14074 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1229 06:46:48.123842   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1229 06:46:48.199483   14074 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1229 06:46:48.199509   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1229 06:46:48.222636   14074 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1229 06:46:48.222662   14074 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1229 06:46:48.238451   14074 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1229 06:46:48.238478   14074 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1229 06:46:48.276793   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1229 06:46:48.311024   14074 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1229 06:46:48.311132   14074 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1229 06:46:48.351083   14074 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1229 06:46:48.351120   14074 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1229 06:46:48.376424   14074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-264018" context rescaled to 1 replicas
	I1229 06:46:48.406612   14074 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1229 06:46:48.406779   14074 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1229 06:46:48.442576   14074 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1229 06:46:48.442600   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1229 06:46:48.488652   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1229 06:46:48.801154   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.104768806s)
	I1229 06:46:49.046449   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.3454619s)
	I1229 06:46:49.046492   14074 addons.go:495] Verifying addon ingress=true in "addons-264018"
	I1229 06:46:49.046561   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.317740973s)
	I1229 06:46:49.046617   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.307443871s)
	I1229 06:46:49.046675   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.283351501s)
	I1229 06:46:49.046786   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.245616615s)
	I1229 06:46:49.046820   14074 addons.go:495] Verifying addon metrics-server=true in "addons-264018"
	I1229 06:46:49.046883   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.201044483s)
	I1229 06:46:49.046950   14074 addons.go:495] Verifying addon registry=true in "addons-264018"
	I1229 06:46:49.047020   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.184946333s)
	I1229 06:46:49.047106   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030089027s)
	I1229 06:46:49.048157   14074 out.go:179] * Verifying ingress addon...
	I1229 06:46:49.048964   14074 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-264018 service yakd-dashboard -n yakd-dashboard
	
	I1229 06:46:49.049016   14074 out.go:179] * Verifying registry addon...
	I1229 06:46:49.050671   14074 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1229 06:46:49.051488   14074 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1229 06:46:49.053709   14074 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
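The warning above is an optimistic-concurrency conflict: the storageclass addon tried to clear the default-class annotation on the local-path StorageClass while another writer had just updated the object, so the API server rejected the write with "the object has been modified; please apply your changes to the latest version and try again". The error text itself describes the remedy (re-read and re-apply). A minimal manual sketch of that step, assuming kubectl is pointed at this cluster (the exact class minikube then promotes to default is not shown in this excerpt), would be:

		# sketch only, not part of the captured run: clear the default-class annotation on local-path
		kubectl patch storageclass local-path -p \
		  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'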
	I1229 06:46:49.054590   14074 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1229 06:46:49.054607   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:49.054870   14074 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1229 06:46:49.054892   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:49.208749   14074 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-264018"
	I1229 06:46:49.211188   14074 out.go:179] * Verifying csi-hostpath-driver addon...
	I1229 06:46:49.213709   14074 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1229 06:46:49.216508   14074 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1229 06:46:49.216527   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:49.555067   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:49.555272   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:49.676243   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.187523835s)
	W1229 06:46:49.676299   14074 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
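	This failure is the ordering issue the stderr calls out: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl batch that creates the snapshot.storage.k8s.io CRDs, so the VolumeSnapshotClass kind cannot be resolved on the first pass. minikube recovers on its own a few lines below by re-running the apply with --force, which completes after roughly 2.5 seconds. A minimal manual sketch of the same remediation, assuming kubectl targets this cluster, waits for the CRD to be established before re-applying the class:

		# sketch only, not part of the captured run: wait for the snapshot CRD, then re-apply the class
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml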
	I1229 06:46:49.716628   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1229 06:46:49.871387   14074 node_ready.go:57] node "addons-264018" has "Ready":"False" status (will retry)
	I1229 06:46:49.897456   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1229 06:46:50.054421   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:50.054626   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:50.217661   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:50.554415   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:50.554632   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:50.717598   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:51.054042   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:51.054183   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:51.217210   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:51.553858   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:51.555679   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:51.717562   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:52.054478   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:52.054628   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:52.216276   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1229 06:46:52.371448   14074 node_ready.go:57] node "addons-264018" has "Ready":"False" status (will retry)
	I1229 06:46:52.376510   14074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.479014768s)
	I1229 06:46:52.553764   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:52.554056   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:52.716804   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:53.054312   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:53.054432   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:53.217176   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:53.553911   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:53.554204   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:53.717373   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:54.054267   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:54.054525   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:54.217302   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1229 06:46:54.372171   14074 node_ready.go:57] node "addons-264018" has "Ready":"False" status (will retry)
	I1229 06:46:54.553806   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:54.554386   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:54.716676   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:55.054356   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:55.054446   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:55.062555   14074 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1229 06:46:55.062622   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:55.081167   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:55.184339   14074 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1229 06:46:55.196677   14074 addons.go:239] Setting addon gcp-auth=true in "addons-264018"
	I1229 06:46:55.196738   14074 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:46:55.197204   14074 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:46:55.215020   14074 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1229 06:46:55.215072   14074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:46:55.217787   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:55.236869   14074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:46:55.332005   14074 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1229 06:46:55.334104   14074 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1229 06:46:55.335276   14074 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1229 06:46:55.335291   14074 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1229 06:46:55.348202   14074 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1229 06:46:55.348239   14074 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1229 06:46:55.360409   14074 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1229 06:46:55.360423   14074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1229 06:46:55.373253   14074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1229 06:46:55.553993   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:55.554168   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:55.668183   14074 addons.go:495] Verifying addon gcp-auth=true in "addons-264018"
	I1229 06:46:55.669333   14074 out.go:179] * Verifying gcp-auth addon...
	I1229 06:46:55.671025   14074 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1229 06:46:55.672625   14074 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1229 06:46:55.672640   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:55.717464   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:56.053548   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:56.054134   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:56.173613   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:56.216855   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:56.553907   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:56.554104   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:56.673485   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:56.716730   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1229 06:46:56.871192   14074 node_ready.go:57] node "addons-264018" has "Ready":"False" status (will retry)
	I1229 06:46:57.053823   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:57.054006   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:57.174388   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:57.216648   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:57.553740   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:57.553845   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:57.674261   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:57.716576   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:58.054005   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:58.054288   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:58.173461   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:58.217179   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:58.553500   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:58.554035   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:58.673373   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:58.716828   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:46:59.053862   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:59.054499   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:59.174010   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:59.216309   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1229 06:46:59.371897   14074 node_ready.go:57] node "addons-264018" has "Ready":"False" status (will retry)
	I1229 06:46:59.553674   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:46:59.553981   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:46:59.674612   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:46:59.717200   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:00.053785   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:00.053999   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:00.174189   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:00.216965   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:00.553710   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:00.553858   14074 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1229 06:47:00.553884   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:00.673857   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:00.734233   14074 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1229 06:47:00.734260   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:00.871881   14074 node_ready.go:49] node "addons-264018" is "Ready"
	I1229 06:47:00.871917   14074 node_ready.go:38] duration metric: took 13.003254376s for node "addons-264018" to be "Ready" ...
	I1229 06:47:00.871937   14074 api_server.go:52] waiting for apiserver process to appear ...
	I1229 06:47:00.871990   14074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:47:00.893869   14074 api_server.go:72] duration metric: took 13.575262247s to wait for apiserver process to appear ...
	I1229 06:47:00.893966   14074 api_server.go:88] waiting for apiserver healthz status ...
	I1229 06:47:00.894004   14074 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1229 06:47:00.901946   14074 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1229 06:47:00.903140   14074 api_server.go:141] control plane version: v1.35.0
	I1229 06:47:00.903168   14074 api_server.go:131] duration metric: took 9.183888ms to wait for apiserver health ...
	I1229 06:47:00.903201   14074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 06:47:00.908755   14074 system_pods.go:59] 20 kube-system pods found
	I1229 06:47:00.908813   14074 system_pods.go:61] "amd-gpu-device-plugin-9gzmq" [4f8b6ab5-1d47-4b72-b504-1f4b3e2277a7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1229 06:47:00.908835   14074 system_pods.go:61] "coredns-7d764666f9-bjjpv" [529a55c5-9aa5-4a01-a640-c1a1365faeb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 06:47:00.908857   14074 system_pods.go:61] "csi-hostpath-attacher-0" [f3be3384-0b2a-442f-8edc-e878b515623c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1229 06:47:00.908866   14074 system_pods.go:61] "csi-hostpath-resizer-0" [c67c1b7a-f4a6-4c42-ba83-fe8ce067efac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 06:47:00.908916   14074 system_pods.go:61] "csi-hostpathplugin-jk9wm" [31526e8b-88bd-4955-9508-6eecd6477904] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 06:47:00.908928   14074 system_pods.go:61] "etcd-addons-264018" [59add920-5020-465f-87e4-79f033a4cb98] Running
	I1229 06:47:00.908934   14074 system_pods.go:61] "kindnet-z5qfv" [9c00bc55-dc55-404f-a371-d8abcfa077d9] Running
	I1229 06:47:00.908939   14074 system_pods.go:61] "kube-apiserver-addons-264018" [62c3b7fc-a986-4282-98f1-a14780a74269] Running
	I1229 06:47:00.908952   14074 system_pods.go:61] "kube-controller-manager-addons-264018" [7ca588df-8cbd-4788-8ce8-559f7045565a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 06:47:00.908959   14074 system_pods.go:61] "kube-ingress-dns-minikube" [5d354ad8-e516-4665-87d5-2688f3dd640c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 06:47:00.908965   14074 system_pods.go:61] "kube-proxy-ktmmg" [3aa906d0-dda4-4171-a4b2-946928a9560e] Running
	I1229 06:47:00.908970   14074 system_pods.go:61] "kube-scheduler-addons-264018" [3ce10183-4e9f-4219-b920-49946591b5ff] Running
	I1229 06:47:00.908977   14074 system_pods.go:61] "metrics-server-5778bb4788-l88w7" [b73304b2-0c01-492a-92ae-84e7287f9acc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 06:47:00.908985   14074 system_pods.go:61] "nvidia-device-plugin-daemonset-ff5s5" [7c5f758b-ca19-4494-9ca6-2fe849085a8f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1229 06:47:00.908992   14074 system_pods.go:61] "registry-788cd7d5bc-8clqr" [2d242195-9ec9-4edb-a76d-7692909e715b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 06:47:00.909001   14074 system_pods.go:61] "registry-creds-567fb78d95-kz6hm" [e7864e17-b25f-4576-8c1a-c14ef1e6725a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 06:47:00.909009   14074 system_pods.go:61] "registry-proxy-tq9sm" [c285867e-5e87-4d61-b445-882e6c785822] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1229 06:47:00.909022   14074 system_pods.go:61] "snapshot-controller-6588d87457-5848x" [24486ae7-9e8b-4f07-a22c-74adf85e0d42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:00.909030   14074 system_pods.go:61] "snapshot-controller-6588d87457-jzrgj" [79680be0-7c6a-4df0-9856-06b13f101fff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:00.909039   14074 system_pods.go:61] "storage-provisioner" [4fd3bb4c-71b1-436c-a78b-b103c1878a7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 06:47:00.909049   14074 system_pods.go:74] duration metric: took 5.84036ms to wait for pod list to return data ...
	I1229 06:47:00.909059   14074 default_sa.go:34] waiting for default service account to be created ...
	I1229 06:47:00.914236   14074 default_sa.go:45] found service account: "default"
	I1229 06:47:00.914261   14074 default_sa.go:55] duration metric: took 5.194973ms for default service account to be created ...
	I1229 06:47:00.914274   14074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 06:47:01.008913   14074 system_pods.go:86] 20 kube-system pods found
	I1229 06:47:01.008951   14074 system_pods.go:89] "amd-gpu-device-plugin-9gzmq" [4f8b6ab5-1d47-4b72-b504-1f4b3e2277a7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1229 06:47:01.008962   14074 system_pods.go:89] "coredns-7d764666f9-bjjpv" [529a55c5-9aa5-4a01-a640-c1a1365faeb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 06:47:01.008972   14074 system_pods.go:89] "csi-hostpath-attacher-0" [f3be3384-0b2a-442f-8edc-e878b515623c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1229 06:47:01.008980   14074 system_pods.go:89] "csi-hostpath-resizer-0" [c67c1b7a-f4a6-4c42-ba83-fe8ce067efac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 06:47:01.008987   14074 system_pods.go:89] "csi-hostpathplugin-jk9wm" [31526e8b-88bd-4955-9508-6eecd6477904] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 06:47:01.008995   14074 system_pods.go:89] "etcd-addons-264018" [59add920-5020-465f-87e4-79f033a4cb98] Running
	I1229 06:47:01.009001   14074 system_pods.go:89] "kindnet-z5qfv" [9c00bc55-dc55-404f-a371-d8abcfa077d9] Running
	I1229 06:47:01.009007   14074 system_pods.go:89] "kube-apiserver-addons-264018" [62c3b7fc-a986-4282-98f1-a14780a74269] Running
	I1229 06:47:01.009020   14074 system_pods.go:89] "kube-controller-manager-addons-264018" [7ca588df-8cbd-4788-8ce8-559f7045565a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 06:47:01.009028   14074 system_pods.go:89] "kube-ingress-dns-minikube" [5d354ad8-e516-4665-87d5-2688f3dd640c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 06:47:01.009034   14074 system_pods.go:89] "kube-proxy-ktmmg" [3aa906d0-dda4-4171-a4b2-946928a9560e] Running
	I1229 06:47:01.009039   14074 system_pods.go:89] "kube-scheduler-addons-264018" [3ce10183-4e9f-4219-b920-49946591b5ff] Running
	I1229 06:47:01.009048   14074 system_pods.go:89] "metrics-server-5778bb4788-l88w7" [b73304b2-0c01-492a-92ae-84e7287f9acc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 06:47:01.009060   14074 system_pods.go:89] "nvidia-device-plugin-daemonset-ff5s5" [7c5f758b-ca19-4494-9ca6-2fe849085a8f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1229 06:47:01.009075   14074 system_pods.go:89] "registry-788cd7d5bc-8clqr" [2d242195-9ec9-4edb-a76d-7692909e715b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 06:47:01.009080   14074 system_pods.go:89] "registry-creds-567fb78d95-kz6hm" [e7864e17-b25f-4576-8c1a-c14ef1e6725a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 06:47:01.009090   14074 system_pods.go:89] "registry-proxy-tq9sm" [c285867e-5e87-4d61-b445-882e6c785822] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1229 06:47:01.009102   14074 system_pods.go:89] "snapshot-controller-6588d87457-5848x" [24486ae7-9e8b-4f07-a22c-74adf85e0d42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.009111   14074 system_pods.go:89] "snapshot-controller-6588d87457-jzrgj" [79680be0-7c6a-4df0-9856-06b13f101fff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.009118   14074 system_pods.go:89] "storage-provisioner" [4fd3bb4c-71b1-436c-a78b-b103c1878a7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 06:47:01.009144   14074 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1229 06:47:01.054160   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:01.054425   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:01.175953   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:01.276757   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:01.284297   14074 system_pods.go:86] 20 kube-system pods found
	I1229 06:47:01.284338   14074 system_pods.go:89] "amd-gpu-device-plugin-9gzmq" [4f8b6ab5-1d47-4b72-b504-1f4b3e2277a7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1229 06:47:01.284348   14074 system_pods.go:89] "coredns-7d764666f9-bjjpv" [529a55c5-9aa5-4a01-a640-c1a1365faeb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 06:47:01.284358   14074 system_pods.go:89] "csi-hostpath-attacher-0" [f3be3384-0b2a-442f-8edc-e878b515623c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1229 06:47:01.284366   14074 system_pods.go:89] "csi-hostpath-resizer-0" [c67c1b7a-f4a6-4c42-ba83-fe8ce067efac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 06:47:01.284379   14074 system_pods.go:89] "csi-hostpathplugin-jk9wm" [31526e8b-88bd-4955-9508-6eecd6477904] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 06:47:01.284385   14074 system_pods.go:89] "etcd-addons-264018" [59add920-5020-465f-87e4-79f033a4cb98] Running
	I1229 06:47:01.284392   14074 system_pods.go:89] "kindnet-z5qfv" [9c00bc55-dc55-404f-a371-d8abcfa077d9] Running
	I1229 06:47:01.284402   14074 system_pods.go:89] "kube-apiserver-addons-264018" [62c3b7fc-a986-4282-98f1-a14780a74269] Running
	I1229 06:47:01.284410   14074 system_pods.go:89] "kube-controller-manager-addons-264018" [7ca588df-8cbd-4788-8ce8-559f7045565a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 06:47:01.284424   14074 system_pods.go:89] "kube-ingress-dns-minikube" [5d354ad8-e516-4665-87d5-2688f3dd640c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 06:47:01.284436   14074 system_pods.go:89] "kube-proxy-ktmmg" [3aa906d0-dda4-4171-a4b2-946928a9560e] Running
	I1229 06:47:01.284442   14074 system_pods.go:89] "kube-scheduler-addons-264018" [3ce10183-4e9f-4219-b920-49946591b5ff] Running
	I1229 06:47:01.284459   14074 system_pods.go:89] "metrics-server-5778bb4788-l88w7" [b73304b2-0c01-492a-92ae-84e7287f9acc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 06:47:01.284467   14074 system_pods.go:89] "nvidia-device-plugin-daemonset-ff5s5" [7c5f758b-ca19-4494-9ca6-2fe849085a8f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1229 06:47:01.284475   14074 system_pods.go:89] "registry-788cd7d5bc-8clqr" [2d242195-9ec9-4edb-a76d-7692909e715b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 06:47:01.284483   14074 system_pods.go:89] "registry-creds-567fb78d95-kz6hm" [e7864e17-b25f-4576-8c1a-c14ef1e6725a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 06:47:01.284491   14074 system_pods.go:89] "registry-proxy-tq9sm" [c285867e-5e87-4d61-b445-882e6c785822] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1229 06:47:01.284502   14074 system_pods.go:89] "snapshot-controller-6588d87457-5848x" [24486ae7-9e8b-4f07-a22c-74adf85e0d42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.284509   14074 system_pods.go:89] "snapshot-controller-6588d87457-jzrgj" [79680be0-7c6a-4df0-9856-06b13f101fff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.284517   14074 system_pods.go:89] "storage-provisioner" [4fd3bb4c-71b1-436c-a78b-b103c1878a7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 06:47:01.554849   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:01.555118   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:01.565156   14074 system_pods.go:86] 20 kube-system pods found
	I1229 06:47:01.565192   14074 system_pods.go:89] "amd-gpu-device-plugin-9gzmq" [4f8b6ab5-1d47-4b72-b504-1f4b3e2277a7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1229 06:47:01.565201   14074 system_pods.go:89] "coredns-7d764666f9-bjjpv" [529a55c5-9aa5-4a01-a640-c1a1365faeb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 06:47:01.565211   14074 system_pods.go:89] "csi-hostpath-attacher-0" [f3be3384-0b2a-442f-8edc-e878b515623c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1229 06:47:01.565235   14074 system_pods.go:89] "csi-hostpath-resizer-0" [c67c1b7a-f4a6-4c42-ba83-fe8ce067efac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 06:47:01.565244   14074 system_pods.go:89] "csi-hostpathplugin-jk9wm" [31526e8b-88bd-4955-9508-6eecd6477904] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 06:47:01.565249   14074 system_pods.go:89] "etcd-addons-264018" [59add920-5020-465f-87e4-79f033a4cb98] Running
	I1229 06:47:01.565256   14074 system_pods.go:89] "kindnet-z5qfv" [9c00bc55-dc55-404f-a371-d8abcfa077d9] Running
	I1229 06:47:01.565261   14074 system_pods.go:89] "kube-apiserver-addons-264018" [62c3b7fc-a986-4282-98f1-a14780a74269] Running
	I1229 06:47:01.565270   14074 system_pods.go:89] "kube-controller-manager-addons-264018" [7ca588df-8cbd-4788-8ce8-559f7045565a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 06:47:01.565278   14074 system_pods.go:89] "kube-ingress-dns-minikube" [5d354ad8-e516-4665-87d5-2688f3dd640c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 06:47:01.565285   14074 system_pods.go:89] "kube-proxy-ktmmg" [3aa906d0-dda4-4171-a4b2-946928a9560e] Running
	I1229 06:47:01.565291   14074 system_pods.go:89] "kube-scheduler-addons-264018" [3ce10183-4e9f-4219-b920-49946591b5ff] Running
	I1229 06:47:01.565299   14074 system_pods.go:89] "metrics-server-5778bb4788-l88w7" [b73304b2-0c01-492a-92ae-84e7287f9acc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 06:47:01.565306   14074 system_pods.go:89] "nvidia-device-plugin-daemonset-ff5s5" [7c5f758b-ca19-4494-9ca6-2fe849085a8f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1229 06:47:01.565314   14074 system_pods.go:89] "registry-788cd7d5bc-8clqr" [2d242195-9ec9-4edb-a76d-7692909e715b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 06:47:01.565322   14074 system_pods.go:89] "registry-creds-567fb78d95-kz6hm" [e7864e17-b25f-4576-8c1a-c14ef1e6725a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 06:47:01.565332   14074 system_pods.go:89] "registry-proxy-tq9sm" [c285867e-5e87-4d61-b445-882e6c785822] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1229 06:47:01.565340   14074 system_pods.go:89] "snapshot-controller-6588d87457-5848x" [24486ae7-9e8b-4f07-a22c-74adf85e0d42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.565350   14074 system_pods.go:89] "snapshot-controller-6588d87457-jzrgj" [79680be0-7c6a-4df0-9856-06b13f101fff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.565357   14074 system_pods.go:89] "storage-provisioner" [4fd3bb4c-71b1-436c-a78b-b103c1878a7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 06:47:01.676193   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:01.717850   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:01.963035   14074 system_pods.go:86] 20 kube-system pods found
	I1229 06:47:01.963079   14074 system_pods.go:89] "amd-gpu-device-plugin-9gzmq" [4f8b6ab5-1d47-4b72-b504-1f4b3e2277a7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1229 06:47:01.963087   14074 system_pods.go:89] "coredns-7d764666f9-bjjpv" [529a55c5-9aa5-4a01-a640-c1a1365faeb4] Running
	I1229 06:47:01.963098   14074 system_pods.go:89] "csi-hostpath-attacher-0" [f3be3384-0b2a-442f-8edc-e878b515623c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1229 06:47:01.963106   14074 system_pods.go:89] "csi-hostpath-resizer-0" [c67c1b7a-f4a6-4c42-ba83-fe8ce067efac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 06:47:01.963114   14074 system_pods.go:89] "csi-hostpathplugin-jk9wm" [31526e8b-88bd-4955-9508-6eecd6477904] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 06:47:01.963120   14074 system_pods.go:89] "etcd-addons-264018" [59add920-5020-465f-87e4-79f033a4cb98] Running
	I1229 06:47:01.963127   14074 system_pods.go:89] "kindnet-z5qfv" [9c00bc55-dc55-404f-a371-d8abcfa077d9] Running
	I1229 06:47:01.963134   14074 system_pods.go:89] "kube-apiserver-addons-264018" [62c3b7fc-a986-4282-98f1-a14780a74269] Running
	I1229 06:47:01.963140   14074 system_pods.go:89] "kube-controller-manager-addons-264018" [7ca588df-8cbd-4788-8ce8-559f7045565a] Running
	I1229 06:47:01.963159   14074 system_pods.go:89] "kube-ingress-dns-minikube" [5d354ad8-e516-4665-87d5-2688f3dd640c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 06:47:01.963165   14074 system_pods.go:89] "kube-proxy-ktmmg" [3aa906d0-dda4-4171-a4b2-946928a9560e] Running
	I1229 06:47:01.963171   14074 system_pods.go:89] "kube-scheduler-addons-264018" [3ce10183-4e9f-4219-b920-49946591b5ff] Running
	I1229 06:47:01.963179   14074 system_pods.go:89] "metrics-server-5778bb4788-l88w7" [b73304b2-0c01-492a-92ae-84e7287f9acc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 06:47:01.963189   14074 system_pods.go:89] "nvidia-device-plugin-daemonset-ff5s5" [7c5f758b-ca19-4494-9ca6-2fe849085a8f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1229 06:47:01.963197   14074 system_pods.go:89] "registry-788cd7d5bc-8clqr" [2d242195-9ec9-4edb-a76d-7692909e715b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 06:47:01.963211   14074 system_pods.go:89] "registry-creds-567fb78d95-kz6hm" [e7864e17-b25f-4576-8c1a-c14ef1e6725a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 06:47:01.963231   14074 system_pods.go:89] "registry-proxy-tq9sm" [c285867e-5e87-4d61-b445-882e6c785822] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1229 06:47:01.963240   14074 system_pods.go:89] "snapshot-controller-6588d87457-5848x" [24486ae7-9e8b-4f07-a22c-74adf85e0d42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.963251   14074 system_pods.go:89] "snapshot-controller-6588d87457-jzrgj" [79680be0-7c6a-4df0-9856-06b13f101fff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 06:47:01.963257   14074 system_pods.go:89] "storage-provisioner" [4fd3bb4c-71b1-436c-a78b-b103c1878a7e] Running
	I1229 06:47:01.963267   14074 system_pods.go:126] duration metric: took 1.048985534s to wait for k8s-apps to be running ...
	I1229 06:47:01.963279   14074 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 06:47:01.963333   14074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:47:01.981736   14074 system_svc.go:56] duration metric: took 18.448108ms WaitForService to wait for kubelet
	I1229 06:47:01.981769   14074 kubeadm.go:587] duration metric: took 14.663163455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 06:47:01.981856   14074 node_conditions.go:102] verifying NodePressure condition ...
	I1229 06:47:01.984837   14074 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 06:47:01.984863   14074 node_conditions.go:123] node cpu capacity is 8
	I1229 06:47:01.984882   14074 node_conditions.go:105] duration metric: took 3.020036ms to run NodePressure ...
	I1229 06:47:01.984900   14074 start.go:242] waiting for startup goroutines ...
	I1229 06:47:02.055828   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:02.055837   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:02.174517   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:02.218781   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:02.554502   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:02.554519   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:02.674161   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:02.717629   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:03.054842   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:03.055007   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:03.174751   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:03.217838   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:03.553723   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:03.554206   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:03.673844   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:03.716643   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:04.054843   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:04.054885   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:04.174559   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:04.218090   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:04.554623   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:04.554966   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:04.675006   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:04.717437   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:05.062958   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:05.063048   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:05.185627   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:05.217183   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:05.554302   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:05.554316   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:05.673470   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:05.717812   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:06.054893   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:06.054905   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:06.173968   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:06.217385   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:06.554586   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:06.554714   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:06.674145   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:06.717141   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:07.054885   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:07.055030   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:07.174662   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:07.275951   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:07.554161   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:07.554197   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:07.674121   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:07.717195   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:08.054189   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:08.054321   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:08.173872   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:08.216722   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:08.555193   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:08.555400   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:08.675088   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:08.776564   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:09.055078   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:09.055583   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:09.178671   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:09.218655   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:09.553821   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:09.554010   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:09.673903   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:09.716767   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:10.085329   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:10.085389   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:10.174099   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:10.229001   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:10.554588   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:10.554652   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:10.675189   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:10.717768   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:11.053935   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:11.054074   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:11.173995   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:11.217074   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:11.554206   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:11.554589   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:11.674588   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:11.717207   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:12.054474   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:12.054497   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:12.173981   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:12.216527   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:12.556083   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:12.556513   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:12.675355   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:12.717786   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:13.055174   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:13.055487   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:13.174510   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:13.217728   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:13.554059   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:13.554127   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:13.674267   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:13.717188   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:14.054264   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:14.054360   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:14.174100   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:14.217793   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:14.554274   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:14.554493   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:14.674301   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:14.717005   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:15.054625   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:15.054843   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:15.174640   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:15.275934   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:15.554041   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:15.554187   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:15.673776   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:15.716447   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:16.054500   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:16.054669   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:16.173950   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:16.216467   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:16.555952   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:16.556059   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:16.674770   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:16.718088   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:17.058079   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:17.058286   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:17.174993   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:17.217059   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:17.554322   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:17.554366   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:17.674215   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:17.717406   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:18.054454   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:18.054493   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:18.173881   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:18.274604   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:18.554877   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:18.554915   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:18.673822   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:18.716868   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:19.053803   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:19.054161   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:19.173831   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:19.216629   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:19.553712   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:19.554062   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:19.674422   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:19.717428   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:20.054500   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:20.054645   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:20.174326   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:20.217017   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:20.554523   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:20.554601   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:20.674746   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:20.718016   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:21.054396   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:21.054501   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:21.174483   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:21.217351   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:21.554345   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:21.554609   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:21.674184   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:21.717934   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:22.053749   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:22.054079   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:22.173414   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:22.216931   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:22.554093   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:22.554347   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:22.674074   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:22.717711   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:23.054053   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:23.054305   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:23.174554   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:23.217691   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:23.554049   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:23.554064   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 06:47:23.673652   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:23.718351   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:24.055003   14074 kapi.go:107] duration metric: took 35.003508241s to wait for kubernetes.io/minikube-addons=registry ...
	I1229 06:47:24.055366   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:24.174340   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:24.217321   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:24.554190   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:24.673659   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:24.717480   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:25.054941   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:25.174471   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:25.217847   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:25.554429   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:25.674487   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:25.717685   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:26.053823   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:26.174272   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:26.216721   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:26.554891   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:26.675479   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:26.719303   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:27.058840   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:27.174717   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:27.217595   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:27.555444   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:27.673816   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:27.716301   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:28.054182   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:28.175007   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:28.217331   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:28.554276   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:28.674286   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:28.717310   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:29.175010   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:29.175178   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:29.314377   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:29.554576   14074 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 06:47:29.674411   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:29.717421   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:30.054051   14074 kapi.go:107] duration metric: took 41.003387038s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1229 06:47:30.174378   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:30.216926   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:30.674479   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:30.717639   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:31.174179   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:31.216998   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:31.674948   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 06:47:31.716467   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:32.174401   14074 kapi.go:107] duration metric: took 36.503374396s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1229 06:47:32.176342   14074 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-264018 cluster.
	I1229 06:47:32.177498   14074 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1229 06:47:32.178593   14074 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1229 06:47:32.275055   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:32.717327   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:33.269185   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:33.717790   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:34.217736   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:34.717424   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:35.217446   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:35.717973   14074 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 06:47:36.217651   14074 kapi.go:107] duration metric: took 47.003942229s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1229 06:47:36.219500   14074 out.go:179] * Enabled addons: storage-provisioner, registry-creds, nvidia-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1229 06:47:36.220645   14074 addons.go:530] duration metric: took 48.901990803s for enable addons: enabled=[storage-provisioner registry-creds nvidia-device-plugin cloud-spanner inspektor-gadget ingress-dns amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1229 06:47:36.220687   14074 start.go:247] waiting for cluster config update ...
	I1229 06:47:36.220712   14074 start.go:256] writing updated cluster config ...
	I1229 06:47:36.220958   14074 ssh_runner.go:195] Run: rm -f paused
	I1229 06:47:36.224924   14074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 06:47:36.227714   14074 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bjjpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.231477   14074 pod_ready.go:94] pod "coredns-7d764666f9-bjjpv" is "Ready"
	I1229 06:47:36.231497   14074 pod_ready.go:86] duration metric: took 3.763342ms for pod "coredns-7d764666f9-bjjpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.233259   14074 pod_ready.go:83] waiting for pod "etcd-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.236583   14074 pod_ready.go:94] pod "etcd-addons-264018" is "Ready"
	I1229 06:47:36.236607   14074 pod_ready.go:86] duration metric: took 3.32793ms for pod "etcd-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.238525   14074 pod_ready.go:83] waiting for pod "kube-apiserver-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.241626   14074 pod_ready.go:94] pod "kube-apiserver-addons-264018" is "Ready"
	I1229 06:47:36.241647   14074 pod_ready.go:86] duration metric: took 3.101332ms for pod "kube-apiserver-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.243120   14074 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.628909   14074 pod_ready.go:94] pod "kube-controller-manager-addons-264018" is "Ready"
	I1229 06:47:36.628937   14074 pod_ready.go:86] duration metric: took 385.799548ms for pod "kube-controller-manager-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:36.828391   14074 pod_ready.go:83] waiting for pod "kube-proxy-ktmmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:37.228628   14074 pod_ready.go:94] pod "kube-proxy-ktmmg" is "Ready"
	I1229 06:47:37.228653   14074 pod_ready.go:86] duration metric: took 400.238542ms for pod "kube-proxy-ktmmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:37.428783   14074 pod_ready.go:83] waiting for pod "kube-scheduler-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:37.828299   14074 pod_ready.go:94] pod "kube-scheduler-addons-264018" is "Ready"
	I1229 06:47:37.828322   14074 pod_ready.go:86] duration metric: took 399.517169ms for pod "kube-scheduler-addons-264018" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 06:47:37.828333   14074 pod_ready.go:40] duration metric: took 1.60337978s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 06:47:37.869640   14074 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 06:47:37.871739   14074 out.go:179] * Done! kubectl is now configured to use "addons-264018" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 06:47:39 addons-264018 crio[770]: time="2025-12-29T06:47:39.975801962Z" level=info msg="Starting container: ec5ba0d610edbb73742742612aac8e3c5c1f86c47394d12d37f5bb3eba0ba996" id=ceebc117-9405-48e5-baab-c1f7771e852b name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 06:47:39 addons-264018 crio[770]: time="2025-12-29T06:47:39.977616886Z" level=info msg="Started container" PID=6470 containerID=ec5ba0d610edbb73742742612aac8e3c5c1f86c47394d12d37f5bb3eba0ba996 description=default/busybox/busybox id=ceebc117-9405-48e5-baab-c1f7771e852b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6377ad184403ff91511822b8f30f5b9be8a6a8d24b4f47b1eb83ad5b20d08366
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.756195081Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc/POD" id=3fdd0cfc-56bb-4033-a4d7-74add6ced77a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.756298577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.762091936Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc Namespace:local-path-storage ID:3bfa124231ba75f4b752d3edcb1f9424364d2eb8cd2f36106f1f1af45f409f0b UID:0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373 NetNS:/var/run/netns/d79d8220-9249-4c3f-8f23-eb72cc61af37 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000398778}] Aliases:map[]}"
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.762125269Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc to CNI network \"kindnet\" (type=ptp)"
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.77979206Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc Namespace:local-path-storage ID:3bfa124231ba75f4b752d3edcb1f9424364d2eb8cd2f36106f1f1af45f409f0b UID:0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373 NetNS:/var/run/netns/d79d8220-9249-4c3f-8f23-eb72cc61af37 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000398778}] Aliases:map[]}"
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.779959626Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc for CNI network kindnet (type=ptp)"
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.780900983Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.781661114Z" level=info msg="Ran pod sandbox 3bfa124231ba75f4b752d3edcb1f9424364d2eb8cd2f36106f1f1af45f409f0b with infra container: local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc/POD" id=3fdd0cfc-56bb-4033-a4d7-74add6ced77a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.782944951Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=6683a40b-d4cb-43c1-aff2-0f29f950e4b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.783146474Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=6683a40b-d4cb-43c1-aff2-0f29f950e4b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.78326817Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=6683a40b-d4cb-43c1-aff2-0f29f950e4b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.784037306Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=25d70cbc-a1c3-428b-954f-7fb7f656d78e name=/runtime.v1.ImageService/PullImage
	Dec 29 06:47:52 addons-264018 crio[770]: time="2025-12-29T06:47:52.784371259Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.353802336Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee" id=25d70cbc-a1c3-428b-954f-7fb7f656d78e name=/runtime.v1.ImageService/PullImage
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.35443806Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ea09bed3-27a6-4d45-b7f6-b17c39432079 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.356732626Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e3767733-1992-4add-834b-9f5a105e9ada name=/runtime.v1.ImageService/ImageStatus
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.361737022Z" level=info msg="Creating container: local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc/helper-pod" id=bf211486-55d5-41b1-95c0-f9c8161078b8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.361871675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.370141625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.370721523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.40738242Z" level=info msg="Created container cd5006266be14df9662281ff629e4b7d4ce497ae5b753c1486ad96a700eb2ec8: local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc/helper-pod" id=bf211486-55d5-41b1-95c0-f9c8161078b8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.407973266Z" level=info msg="Starting container: cd5006266be14df9662281ff629e4b7d4ce497ae5b753c1486ad96a700eb2ec8" id=bbbaf937-1d79-4f30-b512-8ce530371bf9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 06:47:53 addons-264018 crio[770]: time="2025-12-29T06:47:53.409905488Z" level=info msg="Started container" PID=6900 containerID=cd5006266be14df9662281ff629e4b7d4ce497ae5b753c1486ad96a700eb2ec8 description=local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc/helper-pod id=bbbaf937-1d79-4f30-b512-8ce530371bf9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bfa124231ba75f4b752d3edcb1f9424364d2eb8cd2f36106f1f1af45f409f0b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	cd5006266be14       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            Less than a second ago   Exited              helper-pod                               0                   3bfa124231ba7       helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc   local-path-storage
	ec5ba0d610edb       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          13 seconds ago           Running             busybox                                  0                   6377ad184403f       busybox                                                      default
	a6a0581d8c616       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          17 seconds ago           Running             csi-snapshotter                          0                   3f38f310f3daa       csi-hostpathplugin-jk9wm                                     kube-system
	40f1bbbc2f1bc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago           Running             csi-provisioner                          0                   3f38f310f3daa       csi-hostpathplugin-jk9wm                                     kube-system
	cf76d1d0070f8       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            19 seconds ago           Running             liveness-probe                           0                   3f38f310f3daa       csi-hostpathplugin-jk9wm                                     kube-system
	191723d7168d8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago           Running             hostpath                                 0                   3f38f310f3daa       csi-hostpathplugin-jk9wm                                     kube-system
	ae96a02e9cf4d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago           Running             node-driver-registrar                    0                   3f38f310f3daa       csi-hostpathplugin-jk9wm                                     kube-system
	c78c4848a2824       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 21 seconds ago           Running             gcp-auth                                 0                   4514ac274cdad       gcp-auth-5bbcf684b5-xr7dj                                    gcp-auth
	296982828c330       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             23 seconds ago           Running             controller                               0                   519838faee625       ingress-nginx-controller-7847b5c79c-qzd56                    ingress-nginx
	8dec6cdd2fb59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   27 seconds ago           Exited              patch                                    1                   c7f02069b9848       ingress-nginx-admission-patch-mbp5l                          ingress-nginx
	6130b5c80738d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            27 seconds ago           Running             gadget                                   0                   df6beeec13529       gadget-xgnkq                                                 gadget
	c0c3e5a743acf       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              30 seconds ago           Running             registry-proxy                           0                   2a93236ff33c4       registry-proxy-tq9sm                                         kube-system
	f3b99ec55e372       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              31 seconds ago           Running             csi-resizer                              0                   0957bea6c2426       csi-hostpath-resizer-0                                       kube-system
	ebcf1d7d5c969       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   32 seconds ago           Running             csi-external-health-monitor-controller   0                   3f38f310f3daa       csi-hostpathplugin-jk9wm                                     kube-system
	c9b16a3ab994c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     33 seconds ago           Running             amd-gpu-device-plugin                    0                   310a9cd10f34a       amd-gpu-device-plugin-9gzmq                                  kube-system
	4bfc41143f5c8       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             34 seconds ago           Running             csi-attacher                             0                   09fbed3975e26       csi-hostpath-attacher-0                                      kube-system
	e4899f3db6ad2       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     35 seconds ago           Running             nvidia-device-plugin-ctr                 0                   08818ff37f592       nvidia-device-plugin-daemonset-ff5s5                         kube-system
	ca6544fff3e92       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   38 seconds ago           Exited              patch                                    0                   c452417e57940       gcp-auth-certs-patch-pk28d                                   gcp-auth
	ee5452e4de09a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             38 seconds ago           Running             local-path-provisioner                   0                   7be07ec7e95b4       local-path-provisioner-c44bcd496-6dz5k                       local-path-storage
	292ecef6acbc2       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago           Running             volume-snapshot-controller               0                   eca31c704e045       snapshot-controller-6588d87457-jzrgj                         kube-system
	cd19ba0cf2a14       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               39 seconds ago           Running             cloud-spanner-emulator                   0                   84b1af1974295       cloud-spanner-emulator-5649ccbc87-c6x5r                      default
	b719d17a16ec6       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      41 seconds ago           Running             volume-snapshot-controller               0                   102290824f16e       snapshot-controller-6588d87457-5848x                         kube-system
	df14d9b00b14e       ghcr.io/manusa/yakd@sha256:45d2fe163841511e351ae36a5e434fb854a886b0d6a70cea692bd707543fd8c6                                                  42 seconds ago           Running             yakd                                     0                   1052e3cd5b89b       yakd-dashboard-7bcf5795cd-6dvn8                              yakd-dashboard
	ea60838474a35       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   44 seconds ago           Exited              create                                   0                   2fa0a104e45d7       gcp-auth-certs-create-fb4l4                                  gcp-auth
	55f76df085f83       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   45 seconds ago           Exited              create                                   0                   c0d797b78614d       ingress-nginx-admission-create-kt7wq                         ingress-nginx
	f6f13110641eb       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        45 seconds ago           Running             metrics-server                           0                   fd1e3d28492b2       metrics-server-5778bb4788-l88w7                              kube-system
	17994c226aaca       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           47 seconds ago           Running             registry                                 0                   4a447f8b93cc9       registry-788cd7d5bc-8clqr                                    kube-system
	dc2d3d8fc63a8       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               48 seconds ago           Running             minikube-ingress-dns                     0                   a9d14e129bf18       kube-ingress-dns-minikube                                    kube-system
	21f189ec37313       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             52 seconds ago           Running             coredns                                  0                   eeaecc4070669       coredns-7d764666f9-bjjpv                                     kube-system
	eeb0fd8ef1628       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             52 seconds ago           Running             storage-provisioner                      0                   e32f60fbb4c0d       storage-provisioner                                          kube-system
	33b335c36f365       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           About a minute ago       Running             kindnet-cni                              0                   6944528b1429c       kindnet-z5qfv                                                kube-system
	4a3c44dcac1f9       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             About a minute ago       Running             kube-proxy                               0                   2b4874422b47f       kube-proxy-ktmmg                                             kube-system
	51ed82b33b150       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago       Running             kube-controller-manager                  0                   5d36f2f1e4759       kube-controller-manager-addons-264018                        kube-system
	3cbc1d555be4a       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago       Running             kube-scheduler                           0                   dc8140a46f9fa       kube-scheduler-addons-264018                                 kube-system
	76eadfaf130b7       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago       Running             kube-apiserver                           0                   4f97df3c93d04       kube-apiserver-addons-264018                                 kube-system
	1fdad9cd679e8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago       Running             etcd                                     0                   deb476653f629       etcd-addons-264018                                           kube-system
	
	
	==> coredns [21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b] <==
	[INFO] 10.244.0.18:59017 - 13166 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000143769s
	[INFO] 10.244.0.18:40231 - 12419 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100995s
	[INFO] 10.244.0.18:40231 - 12750 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011076s
	[INFO] 10.244.0.18:60976 - 59019 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000048674s
	[INFO] 10.244.0.18:60976 - 58712 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.00006934s
	[INFO] 10.244.0.18:54084 - 42284 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000059936s
	[INFO] 10.244.0.18:54084 - 42052 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000081849s
	[INFO] 10.244.0.18:47079 - 12747 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000058539s
	[INFO] 10.244.0.18:47079 - 12522 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00010406s
	[INFO] 10.244.0.18:41449 - 9562 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097248s
	[INFO] 10.244.0.18:41449 - 9328 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138015s
	[INFO] 10.244.0.21:47621 - 50974 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189186s
	[INFO] 10.244.0.21:36966 - 60921 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276916s
	[INFO] 10.244.0.21:57506 - 40843 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177141s
	[INFO] 10.244.0.21:48389 - 17938 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154684s
	[INFO] 10.244.0.21:51745 - 22616 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095313s
	[INFO] 10.244.0.21:53379 - 43620 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160429s
	[INFO] 10.244.0.21:32832 - 31854 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004225312s
	[INFO] 10.244.0.21:48594 - 16804 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005225111s
	[INFO] 10.244.0.21:59927 - 3685 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004249335s
	[INFO] 10.244.0.21:33109 - 27556 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004360261s
	[INFO] 10.244.0.21:57310 - 2702 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004981506s
	[INFO] 10.244.0.21:46730 - 25957 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006528821s
	[INFO] 10.244.0.21:50808 - 15343 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000930178s
	[INFO] 10.244.0.21:40608 - 61630 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002964501s
	
	
	==> describe nodes <==
	Name:               addons-264018
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-264018
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=addons-264018
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T06_46_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-264018
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-264018"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 06:46:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-264018
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 06:47:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 06:47:43 +0000   Mon, 29 Dec 2025 06:46:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 06:47:43 +0000   Mon, 29 Dec 2025 06:46:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 06:47:43 +0000   Mon, 29 Dec 2025 06:46:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 06:47:43 +0000   Mon, 29 Dec 2025 06:47:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-264018
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                088ab0e9-ed28-4f02-994d-9f8f90cb5943
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     cloud-spanner-emulator-5649ccbc87-c6x5r                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  gadget                      gadget-xgnkq                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  gcp-auth                    gcp-auth-5bbcf684b5-xr7dj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-qzd56                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         64s
	  kube-system                 amd-gpu-device-plugin-9gzmq                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-7d764666f9-bjjpv                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     66s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 csi-hostpathplugin-jk9wm                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 etcd-addons-264018                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         72s
	  kube-system                 kindnet-z5qfv                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      66s
	  kube-system                 kube-apiserver-addons-264018                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-addons-264018                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-ktmmg                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-addons-264018                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-5778bb4788-l88w7                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         65s
	  kube-system                 nvidia-device-plugin-daemonset-ff5s5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 registry-788cd7d5bc-8clqr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 registry-creds-567fb78d95-kz6hm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 registry-proxy-tq9sm                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 snapshot-controller-6588d87457-5848x                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 snapshot-controller-6588d87457-jzrgj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  local-path-storage          helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-c44bcd496-6dz5k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-6dvn8                               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  67s   node-controller  Node addons-264018 event: Registered Node addons-264018 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54] <==
	{"level":"info","ts":"2025-12-29T06:46:38.535893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T06:46:38.535923Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T06:46:38.536373Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T06:46:38.536476Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T06:46:38.536513Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T06:46:38.536530Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T06:46:38.536640Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T06:46:38.536988Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T06:46:38.537107Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T06:46:38.541055Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-29T06:46:38.541154Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T06:47:05.061799Z","caller":"traceutil/trace.go:172","msg":"trace[1041852858] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"122.309486ms","start":"2025-12-29T06:47:04.939470Z","end":"2025-12-29T06:47:05.061779Z","steps":["trace[1041852858] 'process raft request'  (duration: 122.17566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T06:47:09.923207Z","caller":"traceutil/trace.go:172","msg":"trace[703213602] transaction","detail":"{read_only:false; response_revision:1022; number_of_response:1; }","duration":"118.153422ms","start":"2025-12-29T06:47:09.805037Z","end":"2025-12-29T06:47:09.923191Z","steps":["trace[703213602] 'process raft request'  (duration: 118.057925ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-29T06:47:10.083349Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.296662ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128042302164439511 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/metrics-server\" mod_revision:526 > success:<request_put:<key:\"/registry/deployments/kube-system/metrics-server\" value_size:5330 >> failure:<request_range:<key:\"/registry/deployments/kube-system/metrics-server\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-29T06:47:10.083576Z","caller":"traceutil/trace.go:172","msg":"trace[1820841906] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"103.643555ms","start":"2025-12-29T06:47:09.979911Z","end":"2025-12-29T06:47:10.083555Z","steps":["trace[1820841906] 'process raft request'  (duration: 103.534777ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T06:47:10.083728Z","caller":"traceutil/trace.go:172","msg":"trace[574146154] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"106.87259ms","start":"2025-12-29T06:47:09.976839Z","end":"2025-12-29T06:47:10.083712Z","steps":["trace[574146154] 'compare'  (duration: 102.220333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-29T06:47:29.172511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.854956ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128042302164439970 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-6588d87457\" mod_revision:1157 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-6588d87457\" value_size:2143 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-6588d87457\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-29T06:47:29.172604Z","caller":"traceutil/trace.go:172","msg":"trace[1175284449] linearizableReadLoop","detail":"{readStateIndex:1195; appliedIndex:1194; }","duration":"119.732031ms","start":"2025-12-29T06:47:29.052861Z","end":"2025-12-29T06:47:29.172593Z","steps":["trace[1175284449] 'read index received'  (duration: 25.168µs)","trace[1175284449] 'applied index is now lower than readState.Index'  (duration: 119.706042ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-29T06:47:29.172626Z","caller":"traceutil/trace.go:172","msg":"trace[1274256783] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"168.890699ms","start":"2025-12-29T06:47:29.003713Z","end":"2025-12-29T06:47:29.172604Z","steps":["trace[1274256783] 'process raft request'  (duration: 39.874217ms)","trace[1274256783] 'compare'  (duration: 128.754381ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-29T06:47:29.172708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.841733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-29T06:47:29.172727Z","caller":"traceutil/trace.go:172","msg":"trace[1381238008] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"119.864351ms","start":"2025-12-29T06:47:29.052857Z","end":"2025-12-29T06:47:29.172721Z","steps":["trace[1381238008] 'agreement among raft nodes before linearized reading'  (duration: 119.819797ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T06:47:29.312886Z","caller":"traceutil/trace.go:172","msg":"trace[791657384] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"136.222432ms","start":"2025-12-29T06:47:29.176645Z","end":"2025-12-29T06:47:29.312867Z","steps":["trace[791657384] 'process raft request'  (duration: 135.680767ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T06:47:29.364318Z","caller":"traceutil/trace.go:172","msg":"trace[1532176116] transaction","detail":"{read_only:false; response_revision:1168; number_of_response:1; }","duration":"187.264569ms","start":"2025-12-29T06:47:29.177036Z","end":"2025-12-29T06:47:29.364301Z","steps":["trace[1532176116] 'process raft request'  (duration: 187.142698ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T06:47:33.267895Z","caller":"traceutil/trace.go:172","msg":"trace[704543682] transaction","detail":"{read_only:false; response_revision:1206; number_of_response:1; }","duration":"129.373818ms","start":"2025-12-29T06:47:33.138498Z","end":"2025-12-29T06:47:33.267872Z","steps":["trace[704543682] 'process raft request'  (duration: 129.20334ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T06:47:44.702044Z","caller":"traceutil/trace.go:172","msg":"trace[904756185] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"136.568326ms","start":"2025-12-29T06:47:44.565463Z","end":"2025-12-29T06:47:44.702031Z","steps":["trace[904756185] 'process raft request'  (duration: 136.441929ms)"],"step_count":1}
	
	
	==> gcp-auth [c78c4848a2824293365e89dcbcf0c17df9dcd65da941189c067ff1c1ceec08d9] <==
	2025/12/29 06:47:31 GCP Auth Webhook started!
	2025/12/29 06:47:38 Ready to marshal response ...
	2025/12/29 06:47:38 Ready to write response ...
	2025/12/29 06:47:38 Ready to marshal response ...
	2025/12/29 06:47:38 Ready to write response ...
	2025/12/29 06:47:38 Ready to marshal response ...
	2025/12/29 06:47:38 Ready to write response ...
	2025/12/29 06:47:52 Ready to marshal response ...
	2025/12/29 06:47:52 Ready to write response ...
	2025/12/29 06:47:52 Ready to marshal response ...
	2025/12/29 06:47:52 Ready to write response ...
	
	
	==> kernel <==
	 06:47:53 up 30 min,  0 user,  load average: 2.37, 1.01, 0.38
	Linux addons-264018 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f] <==
	I1229 06:46:49.736112       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1229 06:46:49.736262       1 main.go:148] setting mtu 1500 for CNI 
	I1229 06:46:49.736287       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 06:46:49.736313       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T06:46:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 06:46:49.940899       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 06:46:50.030487       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 06:46:50.030507       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 06:46:50.030686       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 06:46:50.431318       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 06:46:50.431356       1 metrics.go:72] Registering metrics
	I1229 06:46:50.431423       1 controller.go:711] "Syncing nftables rules"
	I1229 06:46:59.941428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 06:46:59.941531       1 main.go:301] handling current node
	I1229 06:47:09.940667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 06:47:09.940703       1 main.go:301] handling current node
	I1229 06:47:19.940532       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 06:47:19.940564       1 main.go:301] handling current node
	I1229 06:47:29.940937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 06:47:29.940983       1 main.go:301] handling current node
	I1229 06:47:39.940530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 06:47:39.940582       1 main.go:301] handling current node
	I1229 06:47:49.941420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 06:47:49.941451       1 main.go:301] handling current node
	
	
	==> kube-apiserver [76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1229 06:47:09.977076       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.130.230:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.130.230:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.130.230:443: connect: connection refused" logger="UnhandledError"
	W1229 06:47:10.978786       1 handler_proxy.go:99] no RequestInfo found in the context
	W1229 06:47:10.978805       1 handler_proxy.go:99] no RequestInfo found in the context
	E1229 06:47:10.978830       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1229 06:47:10.978849       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1229 06:47:10.978864       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1229 06:47:10.980006       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1229 06:47:11.830151       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1229 06:47:11.837693       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1229 06:47:11.854105       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1229 06:47:11.861434       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1229 06:47:14.986806       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.130.230:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.130.230:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1229 06:47:14.986923       1 handler_proxy.go:99] no RequestInfo found in the context
	E1229 06:47:14.986969       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1229 06:47:14.997844       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1229 06:47:46.531133       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49456: use of closed network connection
	E1229 06:47:46.667883       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49474: use of closed network connection
	
	
	==> kube-controller-manager [51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e] <==
	I1229 06:46:46.190544       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.190811       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 06:46:46.190946       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.190552       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.190560       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.191127       1 range_allocator.go:177] "Sending events to api server"
	I1229 06:46:46.190956       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="addons-264018"
	I1229 06:46:46.191184       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 06:46:46.191194       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:46:46.191199       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.191206       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 06:46:46.192141       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:46:46.195811       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.197529       1 range_allocator.go:433] "Set node PodCIDR" node="addons-264018" podCIDRs=["10.244.0.0/24"]
	I1229 06:46:46.291043       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:46.291067       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 06:46:46.291075       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 06:46:46.293271       1 shared_informer.go:377] "Caches are synced"
	E1229 06:46:48.606850       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1229 06:47:01.193472       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1229 06:47:16.200673       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1229 06:47:16.200733       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:47:16.300761       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:47:16.301016       1 shared_informer.go:377] "Caches are synced"
	I1229 06:47:16.401084       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789] <==
	I1229 06:46:48.340123       1 server_linux.go:53] "Using iptables proxy"
	I1229 06:46:48.560171       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 06:46:48.669393       1 shared_informer.go:377] "Caches are synced"
	I1229 06:46:48.669703       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1229 06:46:48.670472       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 06:46:48.718378       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 06:46:48.718487       1 server_linux.go:136] "Using iptables Proxier"
	I1229 06:46:48.725351       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 06:46:48.734116       1 server.go:529] "Version info" version="v1.35.0"
	I1229 06:46:48.742701       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 06:46:48.744833       1 config.go:200] "Starting service config controller"
	I1229 06:46:48.744930       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 06:46:48.745003       1 config.go:106] "Starting endpoint slice config controller"
	I1229 06:46:48.745027       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 06:46:48.745060       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 06:46:48.745083       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 06:46:48.746007       1 config.go:309] "Starting node config controller"
	I1229 06:46:48.746066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 06:46:48.746093       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 06:46:48.845713       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 06:46:48.845910       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 06:46:48.845984       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb] <==
	E1229 06:46:39.410252       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 06:46:39.410381       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 06:46:39.410814       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 06:46:39.410935       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 06:46:39.411184       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 06:46:39.411324       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 06:46:39.411940       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 06:46:39.411951       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 06:46:39.412718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 06:46:39.412766       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 06:46:39.412796       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 06:46:39.412824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 06:46:39.412900       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 06:46:40.296627       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 06:46:40.345867       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 06:46:40.349480       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 06:46:40.463155       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 06:46:40.483853       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 06:46:40.495488       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 06:46:40.525514       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 06:46:40.584421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 06:46:40.595368       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 06:46:40.605179       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 06:46:40.609866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1229 06:46:41.005394       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 06:47:28 addons-264018 kubelet[1265]: I1229 06:47:28.890248    1265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f02069b9848672f1b181845e03b894e4ce41196535124c862e2f98bd1eeeb1"
	Dec 29 06:47:29 addons-264018 kubelet[1265]: E1229 06:47:29.866985    1265 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xgnkq" containerName="gadget"
	Dec 29 06:47:29 addons-264018 kubelet[1265]: E1229 06:47:29.895784    1265 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-qzd56" containerName="controller"
	Dec 29 06:47:29 addons-264018 kubelet[1265]: E1229 06:47:29.955038    1265 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xgnkq" containerName="gadget"
	Dec 29 06:47:29 addons-264018 kubelet[1265]: I1229 06:47:29.969302    1265 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-qzd56" podStartSLOduration=27.665709796 podStartE2EDuration="40.969281384s" podCreationTimestamp="2025-12-29 06:46:49 +0000 UTC" firstStartedPulling="2025-12-29 06:47:16.506914307 +0000 UTC m=+34.920035833" lastFinishedPulling="2025-12-29 06:47:29.810485887 +0000 UTC m=+48.223607421" observedRunningTime="2025-12-29 06:47:29.909506154 +0000 UTC m=+48.322627690" watchObservedRunningTime="2025-12-29 06:47:29.969281384 +0000 UTC m=+48.382402921"
	Dec 29 06:47:30 addons-264018 kubelet[1265]: E1229 06:47:30.899373    1265 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xgnkq" containerName="gadget"
	Dec 29 06:47:30 addons-264018 kubelet[1265]: E1229 06:47:30.899524    1265 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-qzd56" containerName="controller"
	Dec 29 06:47:31 addons-264018 kubelet[1265]: I1229 06:47:31.920782    1265 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-xr7dj" podStartSLOduration=21.956891753 podStartE2EDuration="36.92076292s" podCreationTimestamp="2025-12-29 06:46:55 +0000 UTC" firstStartedPulling="2025-12-29 06:47:16.549190912 +0000 UTC m=+34.962312443" lastFinishedPulling="2025-12-29 06:47:31.513062091 +0000 UTC m=+49.926183610" observedRunningTime="2025-12-29 06:47:31.918947315 +0000 UTC m=+50.332068851" watchObservedRunningTime="2025-12-29 06:47:31.92076292 +0000 UTC m=+50.333884459"
	Dec 29 06:47:32 addons-264018 kubelet[1265]: E1229 06:47:32.352031    1265 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 29 06:47:32 addons-264018 kubelet[1265]: E1229 06:47:32.352113    1265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7864e17-b25f-4576-8c1a-c14ef1e6725a-gcr-creds podName:e7864e17-b25f-4576-8c1a-c14ef1e6725a nodeName:}" failed. No retries permitted until 2025-12-29 06:48:04.352095358 +0000 UTC m=+82.765216872 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e7864e17-b25f-4576-8c1a-c14ef1e6725a-gcr-creds") pod "registry-creds-567fb78d95-kz6hm" (UID: "e7864e17-b25f-4576-8c1a-c14ef1e6725a") : secret "registry-creds-gcr" not found
	Dec 29 06:47:34 addons-264018 kubelet[1265]: I1229 06:47:34.702375    1265 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 29 06:47:34 addons-264018 kubelet[1265]: I1229 06:47:34.702417    1265 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 29 06:47:35 addons-264018 kubelet[1265]: E1229 06:47:35.929000    1265 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-jk9wm" containerName="hostpath"
	Dec 29 06:47:35 addons-264018 kubelet[1265]: I1229 06:47:35.948416    1265 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-jk9wm" podStartSLOduration=1.231454761 podStartE2EDuration="35.948396457s" podCreationTimestamp="2025-12-29 06:47:00 +0000 UTC" firstStartedPulling="2025-12-29 06:47:00.964416423 +0000 UTC m=+19.377537952" lastFinishedPulling="2025-12-29 06:47:35.681358117 +0000 UTC m=+54.094479648" observedRunningTime="2025-12-29 06:47:35.945246531 +0000 UTC m=+54.358368068" watchObservedRunningTime="2025-12-29 06:47:35.948396457 +0000 UTC m=+54.361517993"
	Dec 29 06:47:36 addons-264018 kubelet[1265]: E1229 06:47:36.933278    1265 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-jk9wm" containerName="hostpath"
	Dec 29 06:47:38 addons-264018 kubelet[1265]: I1229 06:47:38.503076    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzb95\" (UniqueName: \"kubernetes.io/projected/c34183a6-ab5e-44fd-811d-4ecfe518baf1-kube-api-access-pzb95\") pod \"busybox\" (UID: \"c34183a6-ab5e-44fd-811d-4ecfe518baf1\") " pod="default/busybox"
	Dec 29 06:47:38 addons-264018 kubelet[1265]: I1229 06:47:38.503175    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c34183a6-ab5e-44fd-811d-4ecfe518baf1-gcp-creds\") pod \"busybox\" (UID: \"c34183a6-ab5e-44fd-811d-4ecfe518baf1\") " pod="default/busybox"
	Dec 29 06:47:40 addons-264018 kubelet[1265]: E1229 06:47:40.901934    1265 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-qzd56" containerName="controller"
	Dec 29 06:47:40 addons-264018 kubelet[1265]: I1229 06:47:40.961833    1265 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.780538036 podStartE2EDuration="2.961815519s" podCreationTimestamp="2025-12-29 06:47:38 +0000 UTC" firstStartedPulling="2025-12-29 06:47:38.73106336 +0000 UTC m=+57.144184874" lastFinishedPulling="2025-12-29 06:47:39.912340839 +0000 UTC m=+58.325462357" observedRunningTime="2025-12-29 06:47:40.960863377 +0000 UTC m=+59.373984913" watchObservedRunningTime="2025-12-29 06:47:40.961815519 +0000 UTC m=+59.374937055"
	Dec 29 06:47:41 addons-264018 kubelet[1265]: I1229 06:47:41.667436    1265 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c03fca7f-b58b-4d9d-a5a3-f995cb9fc83d" path="/var/lib/kubelet/pods/c03fca7f-b58b-4d9d-a5a3-f995cb9fc83d/volumes"
	Dec 29 06:47:49 addons-264018 kubelet[1265]: I1229 06:47:49.667959    1265 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e98e897d-0a7d-4c89-9799-efd3bcda707e" path="/var/lib/kubelet/pods/e98e897d-0a7d-4c89-9799-efd3bcda707e/volumes"
	Dec 29 06:47:52 addons-264018 kubelet[1265]: I1229 06:47:52.606767    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373-data\") pod \"helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc\" (UID: \"0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373\") " pod="local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc"
	Dec 29 06:47:52 addons-264018 kubelet[1265]: I1229 06:47:52.606834    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373-script\") pod \"helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc\" (UID: \"0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373\") " pod="local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc"
	Dec 29 06:47:52 addons-264018 kubelet[1265]: I1229 06:47:52.606862    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24ng\" (UniqueName: \"kubernetes.io/projected/0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373-kube-api-access-t24ng\") pod \"helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc\" (UID: \"0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373\") " pod="local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc"
	Dec 29 06:47:52 addons-264018 kubelet[1265]: I1229 06:47:52.607028    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373-gcp-creds\") pod \"helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc\" (UID: \"0dd6cd26-8c04-4b5e-88e6-14ddcbbd1373\") " pod="local-path-storage/helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc"
	
	
	==> storage-provisioner [eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12] <==
	W1229 06:47:29.313833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:31.317515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:31.322784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:33.325886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:33.329990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:35.332950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:35.338433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:37.341036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:37.344650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:39.347998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:39.354200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:41.357246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:41.361489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:43.363996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:43.367510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:45.370891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:45.374325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:47.377569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:47.382084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:49.384340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:49.387901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:51.390822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:51.395933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:53.399912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 06:47:53.404034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-264018 -n addons-264018
helpers_test.go:270: (dbg) Run:  kubectl --context addons-264018 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path ingress-nginx-admission-create-kt7wq ingress-nginx-admission-patch-mbp5l registry-creds-567fb78d95-kz6hm helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-264018 describe pod test-local-path ingress-nginx-admission-create-kt7wq ingress-nginx-admission-patch-mbp5l registry-creds-567fb78d95-kz6hm helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-264018 describe pod test-local-path ingress-nginx-admission-create-kt7wq ingress-nginx-admission-patch-mbp5l registry-creds-567fb78d95-kz6hm helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc: exit status 1 (66.125195ms)

                                                
                                                
-- stdout --
	Name:               test-local-path
	Namespace:          default
	Priority:           0
	Service Account:    default
	Node:               <none>
	Labels:             run=test-local-path
	Annotations:        <none>
	Status:             Pending
	IP:                 
	IPs:                <none>
	NominatedNodeName:  addons-264018
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzs85 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-mzs85:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kt7wq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mbp5l" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-kz6hm" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-264018 describe pod test-local-path ingress-nginx-admission-create-kt7wq ingress-nginx-admission-patch-mbp5l registry-creds-567fb78d95-kz6hm helper-pod-create-pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable headlamp --alsologtostderr -v=1: exit status 11 (235.855441ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:47:54.562030   23527 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:54.562152   23527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:54.562162   23527 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:54.562166   23527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:54.562372   23527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:54.562620   23527 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:54.562906   23527 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:54.562929   23527 addons.go:622] checking whether the cluster is paused
	I1229 06:47:54.563008   23527 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:54.563026   23527 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:54.563402   23527 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:54.580452   23527 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:54.580496   23527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:54.596640   23527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:54.692736   23527 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:54.692838   23527 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:54.723638   23527 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:54.723674   23527 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:54.723679   23527 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:54.723683   23527 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:54.723686   23527 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:54.723689   23527 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:54.723692   23527 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:54.723695   23527 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:54.723698   23527 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:54.723707   23527 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:54.723710   23527 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:54.723713   23527 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:54.723716   23527 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:54.723724   23527 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:54.723727   23527 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:54.723739   23527 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:54.723743   23527 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:54.723752   23527 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:54.723754   23527 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:54.723757   23527 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:54.723762   23527 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:54.723768   23527 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:54.723771   23527 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:54.723773   23527 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:54.723776   23527 cri.go:96] found id: ""
	I1229 06:47:54.723826   23527 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:54.738241   23527 out.go:203] 
	W1229 06:47:54.739551   23527 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:54.739570   23527 out.go:285] * 
	* 
	W1229 06:47:54.740358   23527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:54.741712   23527 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.56s)
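
Note on the exit status 11 failures above: per the stderr log, the addon disable path first checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" inside the node; on this crio-based node /run/runc does not exist, so the runc call exits non-zero and minikube aborts with MK_ADDON_DISABLE_PAUSED. The Go sketch below is hypothetical and not part of this run; it only re-issues that failing command over "minikube ssh", assuming the addons-264018 profile from this report is still available locally and the minikube binary is on PATH.

// Hypothetical reproduction sketch, not part of the test run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the addon-disable path runs over SSH when it checks whether
	// the cluster is paused; on a crio node without /run/runc this is expected
	// to exit non-zero with "open /run/runc: no such file or directory".
	out, err := exec.Command("minikube", "-p", "addons-264018", "ssh", "--",
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("runc list output: %s\nerror: %v\n", out, err)
}
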

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-c6x5r" [ae34ce73-bd18-4c76-b354-f046417d0f46] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003093153s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (255.121507ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:47:57.245630   23784 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:57.245791   23784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:57.245804   23784 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:57.245809   23784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:57.246033   23784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:57.246352   23784 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:57.246707   23784 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:57.246729   23784 addons.go:622] checking whether the cluster is paused
	I1229 06:47:57.246843   23784 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:57.246865   23784 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:57.247319   23784 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:57.267239   23784 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:57.267326   23784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:57.286047   23784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:57.385483   23784 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:57.385547   23784 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:57.422067   23784 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:57.422093   23784 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:57.422098   23784 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:57.422101   23784 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:57.422104   23784 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:57.422108   23784 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:57.422113   23784 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:57.422121   23784 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:57.422124   23784 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:57.422130   23784 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:57.422133   23784 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:57.422136   23784 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:57.422139   23784 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:57.422142   23784 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:57.422145   23784 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:57.422149   23784 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:57.422152   23784 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:57.422155   23784 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:57.422162   23784 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:57.422165   23784 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:57.422172   23784 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:57.422177   23784 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:57.422180   23784 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:57.422183   23784 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:57.422186   23784 cri.go:96] found id: ""
	I1229 06:47:57.422241   23784 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:57.438454   23784 out.go:203] 
	W1229 06:47:57.439745   23784 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:57.439770   23784 out.go:285] * 
	* 
	W1229 06:47:57.440609   23784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:57.441788   23784 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
TestAddons/parallel/LocalPath (8.13s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-264018 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-264018 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-264018 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [f812707b-9cc8-43e8-90aa-fb582bda647e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [f812707b-9cc8-43e8-90aa-fb582bda647e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [f812707b-9cc8-43e8-90aa-fb582bda647e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003400038s
addons_test.go:969: (dbg) Run:  kubectl --context addons-264018 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 ssh "cat /opt/local-path-provisioner/pvc-6eada480-56a9-4b24-ba4a-5bbb94972edc_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-264018 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-264018 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (258.852967ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:48:00.104105   24146 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:48:00.104430   24146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:00.104440   24146 out.go:374] Setting ErrFile to fd 2...
	I1229 06:48:00.104445   24146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:48:00.104634   24146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:48:00.104933   24146 mustload.go:66] Loading cluster: addons-264018
	I1229 06:48:00.105306   24146 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:00.105331   24146 addons.go:622] checking whether the cluster is paused
	I1229 06:48:00.105469   24146 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:48:00.105487   24146 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:48:00.105981   24146 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:48:00.125071   24146 ssh_runner.go:195] Run: systemctl --version
	I1229 06:48:00.125130   24146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:48:00.145954   24146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:48:00.246777   24146 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:48:00.246861   24146 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:48:00.279048   24146 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:48:00.279072   24146 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:48:00.279079   24146 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:48:00.279084   24146 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:48:00.279088   24146 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:48:00.279120   24146 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:48:00.279127   24146 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:48:00.279132   24146 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:48:00.279136   24146 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:48:00.279148   24146 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:48:00.279157   24146 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:48:00.279161   24146 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:48:00.279169   24146 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:48:00.279176   24146 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:48:00.279184   24146 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:48:00.279200   24146 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:48:00.279205   24146 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:48:00.279211   24146 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:48:00.279243   24146 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:48:00.279252   24146 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:48:00.279257   24146 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:48:00.279265   24146 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:48:00.279269   24146 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:48:00.279278   24146 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:48:00.279283   24146 cri.go:96] found id: ""
	I1229 06:48:00.279327   24146 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:48:00.295361   24146 out.go:203] 
	W1229 06:48:00.296656   24146 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:48:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:48:00.296689   24146 out.go:285] * 
	* 
	W1229 06:48:00.297600   24146 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:48:00.299273   24146 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.13s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-ff5s5" [7c5f758b-ca19-4494-9ca6-2fe849085a8f] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003273425s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (265.165505ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:47:51.983386   22528 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:51.983706   22528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:51.983721   22528 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:51.983729   22528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:51.984030   22528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:51.984433   22528 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:51.984924   22528 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:51.984955   22528 addons.go:622] checking whether the cluster is paused
	I1229 06:47:51.985087   22528 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:51.985106   22528 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:51.985651   22528 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:52.006662   22528 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:52.006718   22528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:52.026503   22528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:52.123042   22528 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:52.123117   22528 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:52.156393   22528 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:52.156422   22528 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:52.156432   22528 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:52.156436   22528 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:52.156439   22528 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:52.156442   22528 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:52.156446   22528 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:52.156450   22528 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:52.156454   22528 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:52.156467   22528 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:52.156472   22528 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:52.156477   22528 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:52.156482   22528 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:52.156487   22528 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:52.156492   22528 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:52.156500   22528 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:52.156503   22528 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:52.156508   22528 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:52.156511   22528 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:52.156513   22528 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:52.156516   22528 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:52.156519   22528 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:52.156522   22528 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:52.156525   22528 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:52.156528   22528 cri.go:96] found id: ""
	I1229 06:47:52.156572   22528 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:52.170919   22528 out.go:203] 
	W1229 06:47:52.172630   22528 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:52.172651   22528 out.go:285] * 
	* 
	W1229 06:47:52.173286   22528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:52.177166   22528 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
TestAddons/parallel/Yakd (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-6dvn8" [416f41e0-37f3-49ed-8f31-abd685b06b70] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003294364s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable yakd --alsologtostderr -v=1: exit status 11 (266.723268ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:47:51.978173   22529 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:51.978509   22529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:51.978524   22529 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:51.978529   22529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:51.978815   22529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:51.979121   22529 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:51.979489   22529 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:51.979511   22529 addons.go:622] checking whether the cluster is paused
	I1229 06:47:51.979622   22529 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:51.979650   22529 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:51.980163   22529 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:52.002335   22529 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:52.002401   22529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:52.025682   22529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:52.121828   22529 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:52.121898   22529 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:52.155267   22529 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:52.155291   22529 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:52.155296   22529 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:52.155301   22529 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:52.155305   22529 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:52.155311   22529 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:52.155315   22529 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:52.155320   22529 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:52.155324   22529 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:52.155338   22529 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:52.155342   22529 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:52.155347   22529 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:52.155351   22529 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:52.155356   22529 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:52.155361   22529 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:52.155367   22529 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:52.155372   22529 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:52.155378   22529 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:52.155383   22529 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:52.155387   22529 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:52.155391   22529 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:52.155395   22529 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:52.155399   22529 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:52.155409   22529 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:52.155413   22529 cri.go:96] found id: ""
	I1229 06:47:52.155459   22529 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:52.170926   22529 out.go:203] 
	W1229 06:47:52.172624   22529 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:52.172645   22529 out.go:285] * 
	* 
	W1229 06:47:52.173321   22529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:52.177180   22529 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)
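All of the addon-disable failures above share the same shape: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json`, and that last step exits 1 with `open /run/runc: no such file or directory` on this crio node. The following is a minimal, hypothetical standalone sketch (not minikube code) of those two steps, assuming it is run on the node itself (for example via `minikube ssh`) with crictl and runc available:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runStep runs one command under sudo and prints its combined output and
	// error, mirroring the two ssh_runner steps visible in the log above.
	func runStep(name string, args ...string) {
		out, err := exec.Command("sudo", append([]string{name}, args...)...).CombinedOutput()
		fmt.Printf("%s %v -> err=%v\n%s\n", name, args, err, out)
	}
	
	func main() {
		// Step 1: list kube-system containers (this step succeeds in the log).
		runStep("crictl", "--timeout=10s", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		// Step 2: the check that fails in every disable attempt above.
		runStep("runc", "list", "-f", "json")
	}

On this node the second step would be expected to reproduce the `open /run/runc: no such file or directory` error reported in each MK_ADDON_DISABLE_PAUSED failure.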

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-9gzmq" [4f8b6ab5-1d47-4b72-b504-1f4b3e2277a7] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003404622s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-264018 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-264018 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (259.757813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:47:51.975443   22530 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:47:51.975770   22530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:51.975783   22530 out.go:374] Setting ErrFile to fd 2...
	I1229 06:47:51.975787   22530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:47:51.976019   22530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:47:51.976387   22530 mustload.go:66] Loading cluster: addons-264018
	I1229 06:47:51.976689   22530 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:51.976708   22530 addons.go:622] checking whether the cluster is paused
	I1229 06:47:51.976785   22530 config.go:182] Loaded profile config "addons-264018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:47:51.976796   22530 host.go:66] Checking if "addons-264018" exists ...
	I1229 06:47:51.977190   22530 cli_runner.go:164] Run: docker container inspect addons-264018 --format={{.State.Status}}
	I1229 06:47:51.997744   22530 ssh_runner.go:195] Run: systemctl --version
	I1229 06:47:51.997804   22530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-264018
	I1229 06:47:52.017994   22530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/addons-264018/id_rsa Username:docker}
	I1229 06:47:52.118254   22530 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 06:47:52.118345   22530 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 06:47:52.151345   22530 cri.go:96] found id: "a6a0581d8c61611e2f272b96343a268fe5c70f7029e38edfe79b93a5fd1c926f"
	I1229 06:47:52.151374   22530 cri.go:96] found id: "40f1bbbc2f1bc41ffc575f885bace3f777cfb7eef12de99365091343767cae35"
	I1229 06:47:52.151380   22530 cri.go:96] found id: "cf76d1d0070f881a9ffa8a6c04533147ff62c7b1514e1c54068befd5af071e6e"
	I1229 06:47:52.151388   22530 cri.go:96] found id: "191723d7168d895c6bf51e40836a2fca9afd4a59b8bdb556490f53c792ed7d42"
	I1229 06:47:52.151392   22530 cri.go:96] found id: "ae96a02e9cf4dc9de5d7df316ab70ff8a9d5d492040546a516cfaf1820f86bad"
	I1229 06:47:52.151398   22530 cri.go:96] found id: "c0c3e5a743acf0570907049fa852e80d8eef97b9bad8644c6ff7a8e19a9c7733"
	I1229 06:47:52.151403   22530 cri.go:96] found id: "f3b99ec55e372e2b5c844e254b9ac4579df2d959ab772039faba6a6b067f14b3"
	I1229 06:47:52.151407   22530 cri.go:96] found id: "ebcf1d7d5c969c114009537b9defcb8f0588535464fbc0a2ff842fefb447bb8c"
	I1229 06:47:52.151412   22530 cri.go:96] found id: "c9b16a3ab994c277bfed1b0bf551b2af5be8fe356cf284ce509b41b234e1bbc2"
	I1229 06:47:52.151420   22530 cri.go:96] found id: "4bfc41143f5c8fbc4ac026d705315ecf51fb6fe6d2b5c89b603528b3b12a74db"
	I1229 06:47:52.151429   22530 cri.go:96] found id: "e4899f3db6ad29c94b5952375d6ee208724e8ad29644873f0cd737a68e8a703a"
	I1229 06:47:52.151433   22530 cri.go:96] found id: "292ecef6acbc2acee8ef472e9ea145feebe4df5df46f16f5454e0f099419d6ac"
	I1229 06:47:52.151438   22530 cri.go:96] found id: "b719d17a16ec647d0d77415a2263349a1c5df8d59d393b88bb52526b8c006bde"
	I1229 06:47:52.151442   22530 cri.go:96] found id: "f6f13110641eb941db49bd781fcb8fe293b7d6c4cd5c427879803c2ba59617a1"
	I1229 06:47:52.151447   22530 cri.go:96] found id: "17994c226aacad57fbda97a02fed4553e91bd134ce6d53975246176759124769"
	I1229 06:47:52.151455   22530 cri.go:96] found id: "dc2d3d8fc63a89092d1fb7c5c78aa3b5961a80f87adc9e764d27f5097df557c8"
	I1229 06:47:52.151461   22530 cri.go:96] found id: "21f189ec37313b547a78e6b7eefa6f39267001240605aa04be22ce21ebed4a7b"
	I1229 06:47:52.151467   22530 cri.go:96] found id: "eeb0fd8ef162839fa207c203bea491f6c9a658c71049e04fca885b95bea92e12"
	I1229 06:47:52.151472   22530 cri.go:96] found id: "33b335c36f36593b5f7bf67397f810f69aa60256086010a88acab17521368f9f"
	I1229 06:47:52.151476   22530 cri.go:96] found id: "4a3c44dcac1f9a83e3a787226cced9a0ab00baf1994255347f9606435fbde789"
	I1229 06:47:52.151486   22530 cri.go:96] found id: "51ed82b33b15063ed9584df91a5d6d235c8817d11e47a9a38a1d56e037f2ed6e"
	I1229 06:47:52.151491   22530 cri.go:96] found id: "3cbc1d555be4a7bc9b343c6dd9de17ccc7a1450486f6885704f6cf2ccfab7ebb"
	I1229 06:47:52.151496   22530 cri.go:96] found id: "76eadfaf130b7086d5045d8a47b4ca38794f2e222f8f835d379eb06fbf27c686"
	I1229 06:47:52.151500   22530 cri.go:96] found id: "1fdad9cd679e891d20b4bee4f017f2f3bb45002271534d814ea8b3133140ff54"
	I1229 06:47:52.151508   22530 cri.go:96] found id: ""
	I1229 06:47:52.151563   22530 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 06:47:52.168387   22530 out.go:203] 
	W1229 06:47:52.169681   22530 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:47:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 06:47:52.169712   22530 out.go:285] * 
	* 
	W1229 06:47:52.170411   22530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 06:47:52.171829   22530 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-264018 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                    
TestJSONOutput/pause/Command (2.17s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-100345 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-100345 --output=json --user=testUser: exit status 80 (2.169251597s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"969d5030-f6d7-4c3d-81b2-f1c8bbe08eda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-100345 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"45f074d0-77c7-4a6a-aec3-574fef66cd0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-29T06:59:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"64724472-6283-4a37-971a-aff1fefb08a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-100345 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.17s)
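The TestJSONOutput failures exercise `minikube pause`/`unpause` with `--output=json`, which prints one JSON event per line in the CloudEvents-like shape shown in the stdout block above (specversion, id, source, type, datacontenttype, data). As a rough illustration only (the struct below is a hypothetical sketch inferred from the printed events, not minikube's own types), such a line can be decoded like this:

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// event models only the fields visible in the log lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}
	
	func main() {
		// Shortened copy of the error event emitted by `pause --output=json` above.
		line := `{"specversion":"1.0","id":"45f074d0-77c7-4a6a-aec3-574fef66cd0f",` +
			`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
			`"datacontenttype":"application/json","data":{"exitcode":"80","name":"GUEST_PAUSE"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		// Prints: io.k8s.sigs.minikube.error GUEST_PAUSE 80
		fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
	}

Here the error event's data carries the GUEST_PAUSE name and exit code 80, matching the exit status the test reports.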

                                                
                                    
TestJSONOutput/unpause/Command (1.8s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-100345 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-100345 --output=json --user=testUser: exit status 80 (1.804522676s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"67952daa-bd25-4210-8e3e-7289881eec6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-100345 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2ebaf55a-b3d3-40b6-9d8c-c094d0660c36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-29T06:59:50Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"3fb1a7b9-3250-48fe-9cd5-de2cd8dd4a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-100345 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.80s)

                                                
                                    
TestPause/serial/Pause (6.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-481637 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-481637 --alsologtostderr -v=5: exit status 80 (2.667840263s)

                                                
                                                
-- stdout --
	* Pausing node pause-481637 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:10:43.842781  188734 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:10:43.842886  188734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:10:43.842891  188734 out.go:374] Setting ErrFile to fd 2...
	I1229 07:10:43.842895  188734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:10:43.843108  188734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:10:43.843393  188734 out.go:368] Setting JSON to false
	I1229 07:10:43.843413  188734 mustload.go:66] Loading cluster: pause-481637
	I1229 07:10:43.843810  188734 config.go:182] Loaded profile config "pause-481637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:43.844487  188734 cli_runner.go:164] Run: docker container inspect pause-481637 --format={{.State.Status}}
	I1229 07:10:43.867696  188734 host.go:66] Checking if "pause-481637" exists ...
	I1229 07:10:43.868005  188734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:10:43.942169  188734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:97 OomKillDisable:false NGoroutines:103 SystemTime:2025-12-29 07:10:43.929837799 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:10:43.943069  188734 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-481637 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:10:43.944618  188734 out.go:179] * Pausing node pause-481637 ... 
	I1229 07:10:43.945910  188734 host.go:66] Checking if "pause-481637" exists ...
	I1229 07:10:43.946293  188734 ssh_runner.go:195] Run: systemctl --version
	I1229 07:10:43.946337  188734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:43.978376  188734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/pause-481637/id_rsa Username:docker}
	I1229 07:10:44.084339  188734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:10:44.099392  188734 pause.go:52] kubelet running: true
	I1229 07:10:44.099465  188734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:10:44.239523  188734 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:10:44.239628  188734 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:10:44.316341  188734 cri.go:96] found id: "58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506"
	I1229 07:10:44.316364  188734 cri.go:96] found id: "5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2"
	I1229 07:10:44.316370  188734 cri.go:96] found id: "571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26"
	I1229 07:10:44.316374  188734 cri.go:96] found id: "15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2"
	I1229 07:10:44.316378  188734 cri.go:96] found id: "cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb"
	I1229 07:10:44.316382  188734 cri.go:96] found id: "5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9"
	I1229 07:10:44.316386  188734 cri.go:96] found id: "febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a"
	I1229 07:10:44.316390  188734 cri.go:96] found id: ""
	I1229 07:10:44.316440  188734 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:10:44.329529  188734 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:10:44Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:10:44.665147  188734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:10:44.680803  188734 pause.go:52] kubelet running: false
	I1229 07:10:44.680880  188734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:10:44.806803  188734 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:10:44.806895  188734 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:10:44.882497  188734 cri.go:96] found id: "58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506"
	I1229 07:10:44.882523  188734 cri.go:96] found id: "5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2"
	I1229 07:10:44.882529  188734 cri.go:96] found id: "571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26"
	I1229 07:10:44.882535  188734 cri.go:96] found id: "15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2"
	I1229 07:10:44.882540  188734 cri.go:96] found id: "cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb"
	I1229 07:10:44.882545  188734 cri.go:96] found id: "5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9"
	I1229 07:10:44.882549  188734 cri.go:96] found id: "febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a"
	I1229 07:10:44.882553  188734 cri.go:96] found id: ""
	I1229 07:10:44.882610  188734 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:10:45.302834  188734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:10:45.315583  188734 pause.go:52] kubelet running: false
	I1229 07:10:45.315634  188734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:10:45.433087  188734 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:10:45.433232  188734 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:10:45.501080  188734 cri.go:96] found id: "58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506"
	I1229 07:10:45.501107  188734 cri.go:96] found id: "5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2"
	I1229 07:10:45.501113  188734 cri.go:96] found id: "571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26"
	I1229 07:10:45.501119  188734 cri.go:96] found id: "15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2"
	I1229 07:10:45.501124  188734 cri.go:96] found id: "cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb"
	I1229 07:10:45.501134  188734 cri.go:96] found id: "5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9"
	I1229 07:10:45.501139  188734 cri.go:96] found id: "febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a"
	I1229 07:10:45.501144  188734 cri.go:96] found id: ""
	I1229 07:10:45.501194  188734 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:10:45.909426  188734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:10:45.923326  188734 pause.go:52] kubelet running: false
	I1229 07:10:45.923398  188734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:10:46.061201  188734 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:10:46.061295  188734 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:10:46.136446  188734 cri.go:96] found id: "58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506"
	I1229 07:10:46.136471  188734 cri.go:96] found id: "5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2"
	I1229 07:10:46.136475  188734 cri.go:96] found id: "571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26"
	I1229 07:10:46.136479  188734 cri.go:96] found id: "15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2"
	I1229 07:10:46.136482  188734 cri.go:96] found id: "cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb"
	I1229 07:10:46.136485  188734 cri.go:96] found id: "5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9"
	I1229 07:10:46.136487  188734 cri.go:96] found id: "febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a"
	I1229 07:10:46.136490  188734 cri.go:96] found id: ""
	I1229 07:10:46.136543  188734 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:10:46.299992  188734 out.go:203] 
	W1229 07:10:46.349022  188734 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:10:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:10:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:10:46.349044  188734 out.go:285] * 
	* 
	W1229 07:10:46.350627  188734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:10:46.424292  188734 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-481637 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-481637
helpers_test.go:244: (dbg) docker inspect pause-481637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c",
	        "Created": "2025-12-29T07:09:53.106581938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:09:57.286794192Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/hosts",
	        "LogPath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c-json.log",
	        "Name": "/pause-481637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-481637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-481637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c",
	                "LowerDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-481637",
	                "Source": "/var/lib/docker/volumes/pause-481637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-481637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-481637",
	                "name.minikube.sigs.k8s.io": "pause-481637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd68e0678f8ce9b277bf710cf03b049107f846d8f1694857199e098b9192a0b7",
	            "SandboxKey": "/var/run/docker/netns/bd68e0678f8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-481637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f29fff9b01c97698f6fd6b28b92b63f7937eb8e57f0d4d7f7536ddb8174ceae",
	                    "EndpointID": "c117b2fad07e007922db927d86b250eee424acd7c999c176da773e71ad34e04c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:70:b6:fd:04:bf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-481637",
	                        "e9175fa60278"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-481637 -n pause-481637
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-481637 -n pause-481637: exit status 2 (363.905889ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-481637 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-481637 logs -n 25: (1.309494538s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-891761 --schedule 5m -v=5 --alsologtostderr                                                                │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 5m -v=5 --alsologtostderr                                                                │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 5m -v=5 --alsologtostderr                                                                │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --cancel-scheduled                                                                                  │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │ 29 Dec 25 07:08 UTC │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │ 29 Dec 25 07:08 UTC │
	│ delete  │ -p scheduled-stop-891761                                                                                                     │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ -p insufficient-storage-899672 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio             │ insufficient-storage-899672 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ -p insufficient-storage-899672                                                                                               │ insufficient-storage-899672 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ -p pause-481637 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                    │ pause-481637                │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p offline-crio-469438 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio            │ offline-crio-469438         │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p stopped-upgrade-518014 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ stopped-upgrade-518014      │ jenkins │ v1.35.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p running-upgrade-796549 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ running-upgrade-796549      │ jenkins │ v1.35.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p running-upgrade-796549 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ running-upgrade-796549      │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ stop    │ stopped-upgrade-518014 stop                                                                                                  │ stopped-upgrade-518014      │ jenkins │ v1.35.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p stopped-upgrade-518014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ stopped-upgrade-518014      │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	│ delete  │ -p offline-crio-469438                                                                                                       │ offline-crio-469438         │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p pause-481637 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                             │ pause-481637                │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p test-preload-457393 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio │ test-preload-457393         │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	│ pause   │ -p pause-481637 --alsologtostderr -v=5                                                                                       │ pause-481637                │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	│ delete  │ -p running-upgrade-796549                                                                                                    │ running-upgrade-796549      │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:10:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:10:38.549644  185656 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:10:38.549905  185656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:10:38.549915  185656 out.go:374] Setting ErrFile to fd 2...
	I1229 07:10:38.549920  185656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:10:38.550182  185656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:10:38.550708  185656 out.go:368] Setting JSON to false
	I1229 07:10:38.551788  185656 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3191,"bootTime":1766989048,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:10:38.551840  185656 start.go:143] virtualization: kvm guest
	I1229 07:10:38.553668  185656 out.go:179] * [test-preload-457393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:10:38.555322  185656 notify.go:221] Checking for updates...
	I1229 07:10:38.555334  185656 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:10:38.556490  185656 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:10:38.557684  185656 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:10:38.558806  185656 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:10:38.559898  185656 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:10:38.561087  185656 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:10:38.562953  185656 config.go:182] Loaded profile config "pause-481637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:38.563090  185656 config.go:182] Loaded profile config "running-upgrade-796549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:10:38.563323  185656 config.go:182] Loaded profile config "stopped-upgrade-518014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:10:38.563495  185656 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:10:38.592871  185656 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:10:38.592976  185656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:10:38.659037  185656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:10:38.646800686 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:10:38.659174  185656 docker.go:319] overlay module found
	I1229 07:10:38.661168  185656 out.go:179] * Using the docker driver based on user configuration
	I1229 07:10:38.663162  185656 start.go:309] selected driver: docker
	I1229 07:10:38.663182  185656 start.go:928] validating driver "docker" against <nil>
	I1229 07:10:38.663198  185656 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:10:38.663967  185656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:10:38.730342  185656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:10:38.719654022 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:10:38.730515  185656 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:10:38.730708  185656 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:10:38.734341  185656 out.go:179] * Using Docker driver with root privileges
	I1229 07:10:38.738657  185656 cni.go:84] Creating CNI manager for ""
	I1229 07:10:38.738730  185656 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:10:38.738744  185656 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:10:38.738812  185656 start.go:353] cluster config:
	{Name:test-preload-457393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-457393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:10:38.740108  185656 out.go:179] * Starting "test-preload-457393" primary control-plane node in "test-preload-457393" cluster
	I1229 07:10:38.741238  185656 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:10:38.742434  185656 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:10:38.743483  185656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:10:38.743609  185656 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/config.json ...
	I1229 07:10:38.743634  185656 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:10:38.743647  185656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/config.json: {Name:mk5cf8a12208dd3c08d418702a3ca39eca199067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:38.743884  185656 cache.go:107] acquiring lock: {Name:mkceb8935c60ed9a529274ab83854aa71dbe9a7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.743918  185656 cache.go:107] acquiring lock: {Name:mk52f4077c79f8806c7eb2c6a7253ed35dcf09ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.743885  185656 cache.go:107] acquiring lock: {Name:mk524ccc7d3121d195adc7d1863af70c1e10cb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.744002  185656 cache.go:107] acquiring lock: {Name:mk6876db4017aa5ef89eab36b68c600dec62345c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.744056  185656 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:38.744075  185656 cache.go:107] acquiring lock: {Name:mk4e3cc5ac4b58daa39b77bf4639b595a7b5e1bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.744113  185656 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:38.744090  185656 cache.go:107] acquiring lock: {Name:mkca02c24b265c83f3ba73c3e4bff2d28831259c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.744179  185656 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:38.744176  185656 cache.go:107] acquiring lock: {Name:mkeb7d05fa98b741eb24c41313df007ce9bbb93e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.744058  185656 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:38.744294  185656 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:38.743974  185656 cache.go:107] acquiring lock: {Name:mk2827ee73a1c5c546c3035bd69b730bda1ef682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.744358  185656 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:38.744399  185656 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1229 07:10:38.744433  185656 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:38.745439  185656 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:38.745454  185656 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:38.745441  185656 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:38.745550  185656 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:38.745635  185656 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:38.745763  185656 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1229 07:10:38.745806  185656 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:38.745915  185656 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:38.767425  185656 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:10:38.767442  185656 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:10:38.767457  185656 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:10:38.767484  185656 start.go:360] acquireMachinesLock for test-preload-457393: {Name:mkb2388e448598aef3181efcc77795294aa54220 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:38.767561  185656 start.go:364] duration metric: took 65.135µs to acquireMachinesLock for "test-preload-457393"
	I1229 07:10:38.767581  185656 start.go:93] Provisioning new machine with config: &{Name:test-preload-457393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-457393 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:10:38.767647  185656 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:10:37.278064  184982 out.go:252] * Updating the running docker "pause-481637" container ...
	I1229 07:10:37.278108  184982 machine.go:94] provisionDockerMachine start ...
	I1229 07:10:37.278191  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:37.300787  184982 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:37.301038  184982 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1229 07:10:37.301053  184982 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:10:37.437547  184982 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-481637
	
	I1229 07:10:37.437576  184982 ubuntu.go:182] provisioning hostname "pause-481637"
	I1229 07:10:37.437652  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:37.456991  184982 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:37.457317  184982 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1229 07:10:37.457334  184982 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-481637 && echo "pause-481637" | sudo tee /etc/hostname
	I1229 07:10:37.604254  184982 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-481637
	
	I1229 07:10:37.604322  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:37.625977  184982 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:37.626201  184982 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1229 07:10:37.626227  184982 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-481637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-481637/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-481637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:10:37.763943  184982 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:10:37.763971  184982 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:10:37.763996  184982 ubuntu.go:190] setting up certificates
	I1229 07:10:37.764006  184982 provision.go:84] configureAuth start
	I1229 07:10:37.764055  184982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-481637
	I1229 07:10:37.784301  184982 provision.go:143] copyHostCerts
	I1229 07:10:37.784357  184982 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:10:37.784370  184982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:10:37.784434  184982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:10:37.784582  184982 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:10:37.784595  184982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:10:37.784622  184982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:10:37.784677  184982 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:10:37.784685  184982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:10:37.784726  184982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:10:37.784793  184982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.pause-481637 san=[127.0.0.1 192.168.76.2 localhost minikube pause-481637]
	I1229 07:10:37.933162  184982 provision.go:177] copyRemoteCerts
	I1229 07:10:37.933247  184982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:10:37.933294  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:37.954978  184982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/pause-481637/id_rsa Username:docker}
	I1229 07:10:38.058281  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:10:38.076769  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:10:38.094650  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1229 07:10:38.113168  184982 provision.go:87] duration metric: took 349.146028ms to configureAuth
	I1229 07:10:38.113198  184982 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:10:38.113470  184982 config.go:182] Loaded profile config "pause-481637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:38.113596  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:38.133875  184982 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:38.134204  184982 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1229 07:10:38.134247  184982 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:10:38.488767  184982 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:10:38.488793  184982 machine.go:97] duration metric: took 1.210676286s to provisionDockerMachine
	I1229 07:10:38.488806  184982 start.go:293] postStartSetup for "pause-481637" (driver="docker")
	I1229 07:10:38.488817  184982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:10:38.488865  184982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:10:38.488917  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:38.512042  184982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/pause-481637/id_rsa Username:docker}
	I1229 07:10:38.616507  184982 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:10:38.621568  184982 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:10:38.621601  184982 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:10:38.621614  184982 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:10:38.621661  184982 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:10:38.621736  184982 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:10:38.621832  184982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:10:38.632006  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:10:38.651438  184982 start.go:296] duration metric: took 162.615309ms for postStartSetup
	I1229 07:10:38.651521  184982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:10:38.651574  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:38.673666  184982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/pause-481637/id_rsa Username:docker}
	I1229 07:10:38.778064  184982 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:10:38.786756  184982 fix.go:56] duration metric: took 1.530554616s for fixHost
	I1229 07:10:38.786784  184982 start.go:83] releasing machines lock for "pause-481637", held for 1.530610847s
	I1229 07:10:38.786868  184982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-481637
	I1229 07:10:38.810853  184982 ssh_runner.go:195] Run: cat /version.json
	I1229 07:10:38.810913  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:38.811175  184982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:10:38.811446  184982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-481637
	I1229 07:10:38.835403  184982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/pause-481637/id_rsa Username:docker}
	I1229 07:10:38.839102  184982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/pause-481637/id_rsa Username:docker}
	I1229 07:10:39.006047  184982 ssh_runner.go:195] Run: systemctl --version
	I1229 07:10:39.013797  184982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:10:39.060677  184982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:10:39.065647  184982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:10:39.065715  184982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:10:39.077663  184982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:10:39.077701  184982 start.go:496] detecting cgroup driver to use...
	I1229 07:10:39.077740  184982 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:10:39.077788  184982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:10:39.092726  184982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:10:39.109006  184982 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:10:39.109053  184982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:10:39.129992  184982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:10:39.144931  184982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:10:39.285687  184982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:10:39.422869  184982 docker.go:234] disabling docker service ...
	I1229 07:10:39.422935  184982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:10:39.444429  184982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:10:39.462713  184982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:10:39.620413  184982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:10:39.754596  184982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:10:39.769492  184982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:10:39.787729  184982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:10:39.787787  184982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.799791  184982 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:10:39.799888  184982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.813781  184982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.825683  184982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.837324  184982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:10:39.848120  184982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.861132  184982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.874824  184982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:39.889544  184982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:10:39.899886  184982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:10:39.910761  184982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:40.104887  184982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:10:40.577985  184982 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:10:40.578077  184982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:10:40.582195  184982 start.go:574] Will wait 60s for crictl version
	I1229 07:10:40.582276  184982 ssh_runner.go:195] Run: which crictl
	I1229 07:10:40.586245  184982 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:10:40.615451  184982 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:10:40.615534  184982 ssh_runner.go:195] Run: crio --version
	I1229 07:10:40.651157  184982 ssh_runner.go:195] Run: crio --version
	I1229 07:10:40.688505  184982 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
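	
	The sed commands logged above (07:10:39, pid 184982) rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the systemd cgroup manager, and net.ipv4.ip_unprivileged_port_start=0, then restart crio. A minimal verification sketch, not part of the logged run (run inside the pause-481637 node, for example via minikube ssh -p pause-481637):
	
	    # Confirm the CRI-O drop-in written by the sed edits above.
	    sudo grep -e pause_image -e cgroup_manager -e ip_unprivileged_port_start \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Expected, per the commands in the log:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #     "net.ipv4.ip_unprivileged_port_start=0",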
	I1229 07:10:37.076372  180179 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c4de868d8e7cd7186e9c02689fcab4e20ee7cc69439f75bd6589b0e2f2a76f4c d7ffcb16033604da855d3b18308e6bb7ad051e797b9100e4247305ad296d88a0 61da7c7e3e67248740751476836d5878354d3474fb848f0a01a8b7669160c644 943e484309042e5301b70b2fb6123087713f3668ce7976ec3c6f7721a82e55ce c97134900ec2ce44a343e04081575d629c5b47b3647e400b5131235502c05084 a4331cf92202f1649123550cc1f1aff5703128efd0d525beed55f08f92be60ff eff234d4d3eb541870c2e6695eea1bf17e5c5969b67d630ff03d1f66226b5105 2fe4a9ae36e2b56d4729183e5dfb8897acdfc19fb4167fd4c8273e5692b46e84 04ce805b40e30e67ee90c543f4db6990461ae08ff1397ad544fe3439728e50af: (11.202149988s)
	I1229 07:10:37.076460  180179 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 07:10:37.110446  180179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:10:37.121471  180179 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5647 Dec 29 07:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec 29 07:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 29 07:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Dec 29 07:10 /etc/kubernetes/scheduler.conf
	
	I1229 07:10:37.121540  180179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:10:37.132716  180179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:10:37.145339  180179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:10:37.155372  180179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:10:37.155432  180179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:10:37.164869  180179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:10:37.174773  180179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:10:37.174830  180179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:10:37.186110  180179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:10:37.198401  180179 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:10:37.245006  180179 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:10:38.590799  180179 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.345753434s)
	I1229 07:10:38.590872  180179 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:10:38.814543  180179 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:10:38.878517  180179 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:10:38.945353  180179 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:10:38.945429  180179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:10:39.445622  180179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:10:39.461670  180179 api_server.go:72] duration metric: took 516.322947ms to wait for apiserver process to appear ...
	I1229 07:10:39.461697  180179 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:10:39.461721  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:39.462433  180179 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1229 07:10:39.961837  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:39.778309  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:10:39.778363  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:10:40.689578  184982 cli_runner.go:164] Run: docker network inspect pause-481637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:10:40.708623  184982 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:10:40.712803  184982 kubeadm.go:884] updating cluster {Name:pause-481637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-481637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:10:40.712964  184982 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:10:40.713012  184982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:10:40.749944  184982 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:10:40.749966  184982 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:10:40.750018  184982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:10:40.796869  184982 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:10:40.796894  184982 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:10:40.796903  184982 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1229 07:10:40.797035  184982 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-481637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-481637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:10:40.797123  184982 ssh_runner.go:195] Run: crio config
	I1229 07:10:40.847705  184982 cni.go:84] Creating CNI manager for ""
	I1229 07:10:40.847752  184982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:10:40.847771  184982 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:10:40.847809  184982 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-481637 NodeName:pause-481637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:10:40.848011  184982 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-481637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
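	
	The generated kubeadm config above pins the container runtime to the CRI-O socket (criSocket and containerRuntimeEndpoint both point at unix:///var/run/crio/crio.sock) and sets cgroupDriver: systemd, matching the CRI-O drop-in configured earlier. A sketch for spot-checking those fields in the staged copy before the subsequent kubeadm phases consume it (the /var/tmp/minikube/kubeadm.yaml.new path is the scp target logged just below; run inside the node, not part of the logged run):
	
	    # Spot-check the staged kubeadm config before the init phases run.
	    sudo grep -e criSocket -e containerRuntimeEndpoint -e cgroupDriver \
	      /var/tmp/minikube/kubeadm.yaml.new
	    # Expected, per the config printed above:
	    #   criSocket: unix:///var/run/crio/crio.sock
	    #   containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	    #   cgroupDriver: systemd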
	
	I1229 07:10:40.848096  184982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:10:40.857098  184982 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:10:40.857163  184982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:10:40.871473  184982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1229 07:10:40.895296  184982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:10:40.918275  184982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1229 07:10:40.934823  184982 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:10:40.939330  184982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:41.069689  184982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:10:41.082921  184982 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637 for IP: 192.168.76.2
	I1229 07:10:41.082943  184982 certs.go:195] generating shared ca certs ...
	I1229 07:10:41.082961  184982 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:41.083125  184982 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:10:41.083166  184982 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:10:41.083190  184982 certs.go:257] generating profile certs ...
	I1229 07:10:41.083305  184982 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/client.key
	I1229 07:10:41.083353  184982 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/apiserver.key.d7ad62ba
	I1229 07:10:41.083392  184982 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/proxy-client.key
	I1229 07:10:41.083491  184982 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:10:41.083520  184982 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:10:41.083529  184982 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:10:41.083553  184982 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:10:41.083576  184982 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:10:41.083603  184982 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:10:41.083642  184982 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:10:41.084194  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:10:41.102676  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:10:41.120848  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:10:41.139557  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:10:41.157083  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1229 07:10:41.174643  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:10:41.192344  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:10:41.210591  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:10:41.229746  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:10:41.248689  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:10:41.266606  184982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:10:41.284827  184982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:10:41.297270  184982 ssh_runner.go:195] Run: openssl version
	I1229 07:10:41.303303  184982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:41.310480  184982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:10:41.318141  184982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:41.321971  184982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:41.322015  184982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:41.357324  184982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:10:41.365850  184982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:10:41.373751  184982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:10:41.381244  184982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:10:41.385141  184982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:10:41.385193  184982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:10:41.421035  184982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:10:41.429649  184982 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:10:41.437653  184982 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:10:41.445448  184982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:10:41.449401  184982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:10:41.449464  184982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:10:41.492040  184982 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:10:41.499879  184982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:10:41.503808  184982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:10:41.541019  184982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:10:41.575973  184982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:10:41.612096  184982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:10:41.653183  184982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:10:41.687845  184982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:10:41.722133  184982 kubeadm.go:401] StartCluster: {Name:pause-481637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-481637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:10:41.722279  184982 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:10:41.722365  184982 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:10:41.752203  184982 cri.go:96] found id: "58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506"
	I1229 07:10:41.752237  184982 cri.go:96] found id: "5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2"
	I1229 07:10:41.752242  184982 cri.go:96] found id: "571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26"
	I1229 07:10:41.752247  184982 cri.go:96] found id: "15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2"
	I1229 07:10:41.752251  184982 cri.go:96] found id: "cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb"
	I1229 07:10:41.752256  184982 cri.go:96] found id: "5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9"
	I1229 07:10:41.752260  184982 cri.go:96] found id: "febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a"
	I1229 07:10:41.752265  184982 cri.go:96] found id: ""
	I1229 07:10:41.752305  184982 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:10:41.766893  184982 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:10:41Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:10:41.766987  184982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:10:41.776924  184982 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:10:41.776942  184982 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:10:41.776982  184982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:10:41.786787  184982 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:10:41.787705  184982 kubeconfig.go:125] found "pause-481637" server: "https://192.168.76.2:8443"
	I1229 07:10:41.789033  184982 kapi.go:59] client config for pause-481637: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:10:41.789454  184982 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:10:41.789475  184982 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:10:41.789483  184982 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:10:41.789490  184982 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:10:41.789495  184982 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:10:41.789513  184982 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:10:41.789880  184982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:10:41.800195  184982 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1229 07:10:41.800268  184982 kubeadm.go:602] duration metric: took 23.317611ms to restartPrimaryControlPlane
	I1229 07:10:41.800288  184982 kubeadm.go:403] duration metric: took 78.164ms to StartCluster
	I1229 07:10:41.800308  184982 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:41.800376  184982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:10:41.801758  184982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:41.802018  184982 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:10:41.802092  184982 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:10:41.802331  184982 config.go:182] Loaded profile config "pause-481637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:41.803616  184982 out.go:179] * Enabled addons: 
	I1229 07:10:41.803620  184982 out.go:179] * Verifying Kubernetes components...
	I1229 07:10:41.805098  184982 addons.go:530] duration metric: took 3.011979ms for enable addons: enabled=[]
	I1229 07:10:41.805130  184982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:41.954437  184982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:10:41.975730  184982 node_ready.go:35] waiting up to 6m0s for node "pause-481637" to be "Ready" ...
	I1229 07:10:41.984666  184982 node_ready.go:49] node "pause-481637" is "Ready"
	I1229 07:10:41.984696  184982 node_ready.go:38] duration metric: took 8.917893ms for node "pause-481637" to be "Ready" ...
	I1229 07:10:41.984712  184982 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:10:41.984760  184982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:10:42.000614  184982 api_server.go:72] duration metric: took 198.561288ms to wait for apiserver process to appear ...
	I1229 07:10:42.000642  184982 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:10:42.000661  184982 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:10:42.006201  184982 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1229 07:10:42.007293  184982 api_server.go:141] control plane version: v1.35.0
	I1229 07:10:42.007316  184982 api_server.go:131] duration metric: took 6.667614ms to wait for apiserver health ...
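
The healthz wait above (and the longer 403 -> 500 -> 200 sequence logged by process 180179 further down) is a plain HTTPS poll of the apiserver's /healthz endpoint: 403 means anonymous access is still forbidden because RBAC bootstrap roles are not in place yet, 500 lists the poststarthooks that are still failing, and 200 with body "ok" ends the wait. A rough Go sketch of such a poll follows (assumed code; TLS verification is skipped only to keep the example short, while the real client trusts the cluster CA):

// Hypothetical sketch of polling the apiserver /healthz endpoint until it
// returns 200 "ok", in the spirit of the api_server.go wait above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (RBAC not bootstrapped yet) and 500 (poststarthooks still
			// failing) both mean "keep waiting"; 200 means healthy.
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
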
	I1229 07:10:42.007323  184982 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:10:42.011320  184982 system_pods.go:59] 7 kube-system pods found
	I1229 07:10:42.011370  184982 system_pods.go:61] "coredns-7d764666f9-5zm82" [c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0] Running
	I1229 07:10:42.011379  184982 system_pods.go:61] "etcd-pause-481637" [d12c815e-120b-4d95-94b1-ee02a8fa1164] Running
	I1229 07:10:42.011385  184982 system_pods.go:61] "kindnet-x4zst" [1bff808c-1d25-4aa1-911c-4635dae3d37b] Running
	I1229 07:10:42.011391  184982 system_pods.go:61] "kube-apiserver-pause-481637" [23bb01b3-1277-4348-b4bc-b17d459e9ecd] Running
	I1229 07:10:42.011396  184982 system_pods.go:61] "kube-controller-manager-pause-481637" [d66a44e6-6fc7-4479-9d4e-be6ba0212c2d] Running
	I1229 07:10:42.011402  184982 system_pods.go:61] "kube-proxy-2qrrw" [67bc54d5-c25b-4c66-b4b2-619f0b322789] Running
	I1229 07:10:42.011407  184982 system_pods.go:61] "kube-scheduler-pause-481637" [b5d71853-d628-474f-96c8-110e3dfc58f1] Running
	I1229 07:10:42.011415  184982 system_pods.go:74] duration metric: took 4.085358ms to wait for pod list to return data ...
	I1229 07:10:42.011424  184982 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:10:42.013421  184982 default_sa.go:45] found service account: "default"
	I1229 07:10:42.013443  184982 default_sa.go:55] duration metric: took 2.012865ms for default service account to be created ...
	I1229 07:10:42.013453  184982 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:10:42.016150  184982 system_pods.go:86] 7 kube-system pods found
	I1229 07:10:42.016179  184982 system_pods.go:89] "coredns-7d764666f9-5zm82" [c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0] Running
	I1229 07:10:42.016187  184982 system_pods.go:89] "etcd-pause-481637" [d12c815e-120b-4d95-94b1-ee02a8fa1164] Running
	I1229 07:10:42.016193  184982 system_pods.go:89] "kindnet-x4zst" [1bff808c-1d25-4aa1-911c-4635dae3d37b] Running
	I1229 07:10:42.016199  184982 system_pods.go:89] "kube-apiserver-pause-481637" [23bb01b3-1277-4348-b4bc-b17d459e9ecd] Running
	I1229 07:10:42.016205  184982 system_pods.go:89] "kube-controller-manager-pause-481637" [d66a44e6-6fc7-4479-9d4e-be6ba0212c2d] Running
	I1229 07:10:42.016240  184982 system_pods.go:89] "kube-proxy-2qrrw" [67bc54d5-c25b-4c66-b4b2-619f0b322789] Running
	I1229 07:10:42.016247  184982 system_pods.go:89] "kube-scheduler-pause-481637" [b5d71853-d628-474f-96c8-110e3dfc58f1] Running
	I1229 07:10:42.016255  184982 system_pods.go:126] duration metric: took 2.7956ms to wait for k8s-apps to be running ...
	I1229 07:10:42.016263  184982 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:10:42.016308  184982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:10:42.033259  184982 system_svc.go:56] duration metric: took 16.985845ms WaitForService to wait for kubelet
	I1229 07:10:42.033289  184982 kubeadm.go:587] duration metric: took 231.24121ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:10:42.033318  184982 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:10:42.036157  184982 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:10:42.036190  184982 node_conditions.go:123] node cpu capacity is 8
	I1229 07:10:42.036209  184982 node_conditions.go:105] duration metric: took 2.883962ms to run NodePressure ...
	I1229 07:10:42.036240  184982 start.go:242] waiting for startup goroutines ...
	I1229 07:10:42.036251  184982 start.go:247] waiting for cluster config update ...
	I1229 07:10:42.036266  184982 start.go:256] writing updated cluster config ...
	I1229 07:10:42.036959  184982 ssh_runner.go:195] Run: rm -f paused
	I1229 07:10:42.041826  184982 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:10:42.043152  184982 kapi.go:59] client config for pause-481637: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/pause-481637/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:10:42.047897  184982 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5zm82" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.053760  184982 pod_ready.go:94] pod "coredns-7d764666f9-5zm82" is "Ready"
	I1229 07:10:42.053781  184982 pod_ready.go:86] duration metric: took 5.854146ms for pod "coredns-7d764666f9-5zm82" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.056598  184982 pod_ready.go:83] waiting for pod "etcd-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
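
The pod_ready.go extra wait is, in effect, a client-go lookup of each labelled kube-system pod followed by a check of its Ready condition. A hypothetical client-go sketch of that check (not minikube's implementation; it assumes a kubeconfig at the default location):

// Hypothetical client-go sketch: fetch a kube-system pod and inspect its
// Ready condition, the property the pod_ready.go wait above is tracking.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-481637", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", pod.Name, podIsReady(pod))
}
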
	I1229 07:10:40.880995  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:10:40.881020  180179 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:10:40.881036  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:40.898208  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:10:40.898249  180179 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:10:40.962495  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:40.967036  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:10:40.967078  180179 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:10:41.462371  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:41.466479  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:10:41.466506  180179 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:10:41.962941  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:41.968061  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:10:41.968090  180179 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:10:42.462762  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:42.467647  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:10:42.474456  180179 api_server.go:141] control plane version: v1.32.0
	I1229 07:10:42.474483  180179 api_server.go:131] duration metric: took 3.012780254s to wait for apiserver health ...
	I1229 07:10:42.474492  180179 cni.go:84] Creating CNI manager for ""
	I1229 07:10:42.474497  180179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:10:42.476257  180179 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:10:38.769670  185656 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:10:38.769848  185656 start.go:159] libmachine.API.Create for "test-preload-457393" (driver="docker")
	I1229 07:10:38.769874  185656 client.go:173] LocalClient.Create starting
	I1229 07:10:38.769927  185656 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:10:38.769956  185656 main.go:144] libmachine: Decoding PEM data...
	I1229 07:10:38.769971  185656 main.go:144] libmachine: Parsing certificate...
	I1229 07:10:38.770017  185656 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:10:38.770034  185656 main.go:144] libmachine: Decoding PEM data...
	I1229 07:10:38.770045  185656 main.go:144] libmachine: Parsing certificate...
	I1229 07:10:38.770367  185656 cli_runner.go:164] Run: docker network inspect test-preload-457393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:10:38.793355  185656 cli_runner.go:211] docker network inspect test-preload-457393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:10:38.793428  185656 network_create.go:284] running [docker network inspect test-preload-457393] to gather additional debugging logs...
	I1229 07:10:38.793445  185656 cli_runner.go:164] Run: docker network inspect test-preload-457393
	W1229 07:10:38.816704  185656 cli_runner.go:211] docker network inspect test-preload-457393 returned with exit code 1
	I1229 07:10:38.816734  185656 network_create.go:287] error running [docker network inspect test-preload-457393]: docker network inspect test-preload-457393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network test-preload-457393 not found
	I1229 07:10:38.816750  185656 network_create.go:289] output of [docker network inspect test-preload-457393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network test-preload-457393 not found
	
	** /stderr **
	I1229 07:10:38.816843  185656 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:10:38.839475  185656 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:10:38.840071  185656 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:10:38.840731  185656 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:10:38.841404  185656 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0f29fff9b01c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:4b:71:52:08:de} reservation:<nil>}
	I1229 07:10:38.842143  185656 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c27210}
	I1229 07:10:38.842202  185656 network_create.go:124] attempt to create docker network test-preload-457393 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:10:38.842282  185656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-457393 test-preload-457393
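
The subnet scan above walks candidate 192.168.x.0/24 networks, skips the ones already claimed by other profiles (49, 58, 67, 76) and settles on 192.168.85.0/24; the candidates visible in the log step by 9 in the third octet. A toy Go sketch of that selection, with the taken set hard-coded for illustration (the real code derives it from docker network inspect):

// Hypothetical sketch of the free-subnet selection logged above. The step of
// 9 and the taken set are read off the log lines; this is not minikube source.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		candidate := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[candidate] {
			return candidate
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
}
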
	I1229 07:10:38.897030  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1229 07:10:38.900713  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1229 07:10:38.901403  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1229 07:10:38.902572  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0
	I1229 07:10:38.902939  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0
	I1229 07:10:38.906401  185656 network_create.go:108] docker network test-preload-457393 192.168.85.0/24 created
	I1229 07:10:38.906429  185656 kic.go:121] calculated static IP "192.168.85.2" for the "test-preload-457393" container
	I1229 07:10:38.906491  185656 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:10:38.921452  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1229 07:10:38.921914  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0
	I1229 07:10:38.932840  185656 cli_runner.go:164] Run: docker volume create test-preload-457393 --label name.minikube.sigs.k8s.io=test-preload-457393 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:10:38.955086  185656 oci.go:103] Successfully created a docker volume test-preload-457393
	I1229 07:10:38.955161  185656 cli_runner.go:164] Run: docker run --rm --name test-preload-457393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-457393 --entrypoint /usr/bin/test -v test-preload-457393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:10:38.989637  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:10:38.989667  185656 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 245.699402ms
	I1229 07:10:38.989681  185656 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:10:39.206916  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:10:39.206953  185656 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 462.917448ms
	I1229 07:10:39.206968  185656 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:10:39.412853  185656 oci.go:107] Successfully prepared a docker volume test-preload-457393
	I1229 07:10:39.412914  185656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1229 07:10:39.412989  185656 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:10:39.413142  185656 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:10:39.413205  185656 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:10:39.479464  185656 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-457393 --name test-preload-457393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-457393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-457393 --network test-preload-457393 --ip 192.168.85.2 --volume test-preload-457393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:10:39.714286  185656 cache.go:162] opening:  /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1229 07:10:39.781794  185656 cli_runner.go:164] Run: docker container inspect test-preload-457393 --format={{.State.Running}}
	I1229 07:10:39.819530  185656 cli_runner.go:164] Run: docker container inspect test-preload-457393 --format={{.State.Status}}
	I1229 07:10:39.849666  185656 cli_runner.go:164] Run: docker exec test-preload-457393 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:10:39.921734  185656 oci.go:144] the created container "test-preload-457393" has a running status.
	I1229 07:10:39.921775  185656 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa...
	I1229 07:10:39.953474  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:10:39.953580  185656 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.209709339s
	I1229 07:10:39.953599  185656 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:10:40.048985  185656 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:10:40.086827  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:10:40.087277  185656 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 1.342863556s
	I1229 07:10:40.087307  185656 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:10:40.098837  185656 cli_runner.go:164] Run: docker container inspect test-preload-457393 --format={{.State.Status}}
	I1229 07:10:40.119266  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:10:40.119302  185656 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 1.37538339s
	I1229 07:10:40.119319  185656 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:10:40.130514  185656 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:10:40.130575  185656 kic_runner.go:114] Args: [docker exec --privileged test-preload-457393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:10:40.222560  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:10:40.222598  185656 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.478509302s
	I1229 07:10:40.222615  185656 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:10:40.225043  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:10:40.225079  185656 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 1.481221707s
	I1229 07:10:40.225093  185656 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:10:40.332029  185656 cli_runner.go:164] Run: docker container inspect test-preload-457393 --format={{.State.Status}}
	I1229 07:10:40.357406  185656 machine.go:94] provisionDockerMachine start ...
	I1229 07:10:40.357500  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:40.384300  185656 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:40.384616  185656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1229 07:10:40.384637  185656 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:10:40.420257  185656 cache.go:157] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:10:40.420288  185656 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.676113514s
	I1229 07:10:40.420304  185656 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:10:40.420361  185656 cache.go:87] Successfully saved all images to host disk.
	I1229 07:10:40.538331  185656 main.go:144] libmachine: SSH cmd err, output: <nil>: test-preload-457393
	
	I1229 07:10:40.538363  185656 ubuntu.go:182] provisioning hostname "test-preload-457393"
	I1229 07:10:40.538431  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:40.563916  185656 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:40.564283  185656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1229 07:10:40.564309  185656 main.go:144] libmachine: About to run SSH command:
	sudo hostname test-preload-457393 && echo "test-preload-457393" | sudo tee /etc/hostname
	I1229 07:10:40.719700  185656 main.go:144] libmachine: SSH cmd err, output: <nil>: test-preload-457393
	
	I1229 07:10:40.719777  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:40.743050  185656 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:40.743367  185656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1229 07:10:40.743389  185656 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-457393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-457393/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-457393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:10:40.915366  185656 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:10:40.915398  185656 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:10:40.915431  185656 ubuntu.go:190] setting up certificates
	I1229 07:10:40.915443  185656 provision.go:84] configureAuth start
	I1229 07:10:40.915505  185656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-457393
	I1229 07:10:40.938450  185656 provision.go:143] copyHostCerts
	I1229 07:10:40.938509  185656 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:10:40.938520  185656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:10:40.938599  185656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:10:40.938709  185656 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:10:40.938720  185656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:10:40.938762  185656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:10:40.938845  185656 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:10:40.938858  185656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:10:40.938896  185656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:10:40.939031  185656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.test-preload-457393 san=[127.0.0.1 192.168.85.2 localhost minikube test-preload-457393]
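
provision.go issues a Docker machine server certificate signed by the minikube CA, carrying the IP and DNS SANs listed in the log (127.0.0.1, 192.168.85.2, localhost, minikube, test-preload-457393). Below is a self-contained Go sketch of issuing such a SAN certificate with crypto/x509; the throwaway CA, output file name and 3-year lifetime are assumptions for illustration rather than minikube's values, and error handling is elided to keep it short.

// Hypothetical sketch: create a CA-signed server certificate with IP and DNS
// SANs like the ones in the provision.go log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In the real flow the CA comes from ~/.minikube/certs; a throwaway CA is
	// generated here so the example is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-457393"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "test-preload-457393"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
	// The matching srvKey would be PEM-encoded to server-key.pem in the same way.
}
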
	I1229 07:10:40.974174  185656 provision.go:177] copyRemoteCerts
	I1229 07:10:40.974371  185656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:10:40.974436  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:41.000198  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:41.106128  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:10:41.125269  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1229 07:10:41.143330  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:10:41.161525  185656 provision.go:87] duration metric: took 246.058651ms to configureAuth
	I1229 07:10:41.161551  185656 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:10:41.161721  185656 config.go:182] Loaded profile config "test-preload-457393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:41.161829  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:41.181532  185656 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:41.181726  185656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1229 07:10:41.181740  185656 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:10:41.458703  185656 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:10:41.458728  185656 machine.go:97] duration metric: took 1.101296516s to provisionDockerMachine
	I1229 07:10:41.458740  185656 client.go:176] duration metric: took 2.68885921s to LocalClient.Create
	I1229 07:10:41.458766  185656 start.go:167] duration metric: took 2.688916904s to libmachine.API.Create "test-preload-457393"
	I1229 07:10:41.458775  185656 start.go:293] postStartSetup for "test-preload-457393" (driver="docker")
	I1229 07:10:41.458791  185656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:10:41.458862  185656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:10:41.458913  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:41.479498  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:41.580023  185656 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:10:41.583449  185656 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:10:41.583489  185656 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:10:41.583501  185656 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:10:41.583570  185656 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:10:41.583663  185656 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:10:41.583749  185656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:10:41.592263  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:10:41.611797  185656 start.go:296] duration metric: took 153.00496ms for postStartSetup
	I1229 07:10:41.612132  185656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-457393
	I1229 07:10:41.632941  185656 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/config.json ...
	I1229 07:10:41.633273  185656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:10:41.633326  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:41.652989  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:41.749841  185656 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:10:41.755410  185656 start.go:128] duration metric: took 2.987750564s to createHost
	I1229 07:10:41.755436  185656 start.go:83] releasing machines lock for "test-preload-457393", held for 2.987864984s
	I1229 07:10:41.755501  185656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-457393
	I1229 07:10:41.776609  185656 ssh_runner.go:195] Run: cat /version.json
	I1229 07:10:41.776645  185656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:10:41.776676  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:41.776721  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:41.798487  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:41.799621  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:41.905035  185656 ssh_runner.go:195] Run: systemctl --version
	I1229 07:10:41.977678  185656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:10:42.021938  185656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:10:42.027398  185656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:10:42.027467  185656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:10:42.059503  185656 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:10:42.059526  185656 start.go:496] detecting cgroup driver to use...
	I1229 07:10:42.059563  185656 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:10:42.059614  185656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:10:42.080622  185656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:10:42.093713  185656 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:10:42.093782  185656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:10:42.110520  185656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:10:42.127559  185656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:10:42.210969  185656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:10:42.302098  185656 docker.go:234] disabling docker service ...
	I1229 07:10:42.302182  185656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:10:42.320978  185656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:10:42.333678  185656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:10:42.416281  185656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:10:42.507847  185656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:10:42.520875  185656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:10:42.536651  185656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:10:42.536718  185656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.546771  185656 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:10:42.546831  185656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.555890  185656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.565100  185656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.575177  185656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:10:42.583617  185656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.592609  185656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.608084  185656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:42.618076  185656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:10:42.625732  185656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:10:42.633577  185656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:42.718905  185656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:10:42.864804  185656 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:10:42.864879  185656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:10:42.869560  185656 start.go:574] Will wait 60s for crictl version
	I1229 07:10:42.869609  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:42.873848  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:10:42.902276  185656 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:10:42.902362  185656 ssh_runner.go:195] Run: crio --version
	I1229 07:10:42.935087  185656 ssh_runner.go:195] Run: crio --version
	I1229 07:10:42.970437  185656 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:10:42.477496  180179 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:10:42.481669  180179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I1229 07:10:42.481685  180179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:10:42.500507  180179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:10:42.824933  180179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:10:42.828181  180179 system_pods.go:59] 8 kube-system pods found
	I1229 07:10:42.828246  180179 system_pods.go:61] "coredns-668d6bf9bc-8vskg" [27eae4f9-5b4d-41e4-8308-030b9bdec27f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:10:42.828257  180179 system_pods.go:61] "etcd-running-upgrade-796549" [80f62618-d4f6-4401-88bf-60e467bc5eb7] Pending
	I1229 07:10:42.828265  180179 system_pods.go:61] "kindnet-qdnfs" [719ce600-24d5-4321-ad7b-510500e0e8f6] Running
	I1229 07:10:42.828276  180179 system_pods.go:61] "kube-apiserver-running-upgrade-796549" [633129a9-c21d-435b-90ca-406195a1fc32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:10:42.828289  180179 system_pods.go:61] "kube-controller-manager-running-upgrade-796549" [43d10c97-30c5-485e-998a-f2d9216a5356] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:10:42.828297  180179 system_pods.go:61] "kube-proxy-s9r7n" [9bdc6553-5cc8-4d45-89ea-ee6fe1bf128c] Running
	I1229 07:10:42.828302  180179 system_pods.go:61] "kube-scheduler-running-upgrade-796549" [d4c9be11-9d59-437a-9a5c-d8e7e7413f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:10:42.828310  180179 system_pods.go:61] "storage-provisioner" [0991efd5-99d2-4ad5-8dbf-9e535bd584c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:10:42.828317  180179 system_pods.go:74] duration metric: took 3.362142ms to wait for pod list to return data ...
	I1229 07:10:42.828325  180179 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:10:42.830684  180179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:10:42.830719  180179 node_conditions.go:123] node cpu capacity is 8
	I1229 07:10:42.830733  180179 node_conditions.go:105] duration metric: took 2.401089ms to run NodePressure ...
	I1229 07:10:42.830799  180179 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:10:43.079705  180179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:10:43.087802  180179 ops.go:34] apiserver oom_adj: -16
	I1229 07:10:43.087822  180179 kubeadm.go:602] duration metric: took 17.297023495s to restartPrimaryControlPlane
	I1229 07:10:43.087831  180179 kubeadm.go:403] duration metric: took 17.393329502s to StartCluster
	I1229 07:10:43.087846  180179 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:43.087916  180179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:10:43.088821  180179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:43.089073  180179 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:10:43.089142  180179 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:10:43.089260  180179 addons.go:70] Setting storage-provisioner=true in profile "running-upgrade-796549"
	I1229 07:10:43.089285  180179 addons.go:239] Setting addon storage-provisioner=true in "running-upgrade-796549"
	I1229 07:10:43.089291  180179 addons.go:70] Setting default-storageclass=true in profile "running-upgrade-796549"
	I1229 07:10:43.089311  180179 config.go:182] Loaded profile config "running-upgrade-796549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:10:43.089320  180179 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-796549"
	W1229 07:10:43.089298  180179 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:10:43.089421  180179 host.go:66] Checking if "running-upgrade-796549" exists ...
	I1229 07:10:43.089568  180179 cli_runner.go:164] Run: docker container inspect running-upgrade-796549 --format={{.State.Status}}
	I1229 07:10:43.089862  180179 cli_runner.go:164] Run: docker container inspect running-upgrade-796549 --format={{.State.Status}}
	I1229 07:10:43.090759  180179 out.go:179] * Verifying Kubernetes components...
	I1229 07:10:43.092057  180179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:43.110674  180179 kapi.go:59] client config for running-upgrade-796549: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/running-upgrade-796549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/running-upgrade-796549/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:10:43.111066  180179 addons.go:239] Setting addon default-storageclass=true in "running-upgrade-796549"
	W1229 07:10:43.111101  180179 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:10:43.111134  180179 host.go:66] Checking if "running-upgrade-796549" exists ...
	I1229 07:10:43.111614  180179 cli_runner.go:164] Run: docker container inspect running-upgrade-796549 --format={{.State.Status}}
	I1229 07:10:43.112552  180179 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:42.971617  185656 cli_runner.go:164] Run: docker network inspect test-preload-457393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:10:42.990654  185656 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:10:42.994942  185656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:10:43.005266  185656 kubeadm.go:884] updating cluster {Name:test-preload-457393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-457393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:10:43.005372  185656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:10:43.005413  185656 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:10:43.029052  185656 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1229 07:10:43.029073  185656 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1229 07:10:43.029123  185656 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:43.029159  185656 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.029177  185656 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.029204  185656 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.029241  185656 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:43.029187  185656 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1229 07:10:43.029207  185656 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.029159  185656 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.030406  185656 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:43.030452  185656 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.030664  185656 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.030672  185656 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1229 07:10:43.030681  185656 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.030727  185656 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.030752  185656 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:43.030762  185656 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.147738  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.152516  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.153616  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.155850  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.156844  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:43.179571  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.184636  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1229 07:10:43.258946  185656 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1229 07:10:43.259000  185656 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.259020  185656 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1229 07:10:43.259048  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.259050  185656 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.259089  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.259175  185656 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508" in container runtime
	I1229 07:10:43.259202  185656 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc" in container runtime
	I1229 07:10:43.259207  185656 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.259253  185656 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.259280  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.259285  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.259353  185656 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499" in container runtime
	I1229 07:10:43.259387  185656 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:43.259408  185656 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8" in container runtime
	I1229 07:10:43.259421  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.259426  185656 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.259452  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.259506  185656 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1229 07:10:43.259524  185656 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1229 07:10:43.259547  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:43.268018  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.268100  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.268139  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.268191  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	W1229 07:10:43.270140  185656 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1229 07:10:43.270204  185656 retry.go:84] will retry after 300ms: ssh: rejected: connect failed (open failed)
	I1229 07:10:43.270406  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:10:43.270542  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:43.272714  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.272801  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:43.272991  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.273070  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:43.299639  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:43.300872  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:43.301717  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:43.319238  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.319333  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.319539  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:43.319372  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.319605  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:43.319666  185656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-457393
	I1229 07:10:43.346610  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:43.346764  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:43.357239  185656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/test-preload-457393/id_rsa Username:docker}
	I1229 07:10:43.438848  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:10:43.446272  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.451381  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.482435  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:10:43.482469  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:10:43.482522  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:10:43.488076  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:10:43.488141  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:10:43.494917  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:10:43.524440  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0
	I1229 07:10:43.524487  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1229 07:10:43.524555  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:10:43.524567  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:10:43.524582  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1229 07:10:43.524641  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1229 07:10:43.527580  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0
	I1229 07:10:43.527662  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1229 07:10:43.527666  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:10:43.527736  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:10:43.534794  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1229 07:10:43.534840  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1229 07:10:43.534864  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1229 07:10:43.534884  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:10:43.534896  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1229 07:10:43.534915  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (17248256 bytes)
	I1229 07:10:43.534948  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1229 07:10:43.534974  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1229 07:10:43.534976  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1229 07:10:43.534987  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (25791488 bytes)
	I1229 07:10:43.535006  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1229 07:10:43.535027  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (23144960 bytes)
	I1229 07:10:43.542966  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1229 07:10:43.542995  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1229 07:10:42.061831  184982 pod_ready.go:94] pod "etcd-pause-481637" is "Ready"
	I1229 07:10:42.061854  184982 pod_ready.go:86] duration metric: took 5.234612ms for pod "etcd-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.063852  184982 pod_ready.go:83] waiting for pod "kube-apiserver-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.068050  184982 pod_ready.go:94] pod "kube-apiserver-pause-481637" is "Ready"
	I1229 07:10:42.068080  184982 pod_ready.go:86] duration metric: took 4.20574ms for pod "kube-apiserver-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.070122  184982 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.448969  184982 pod_ready.go:94] pod "kube-controller-manager-pause-481637" is "Ready"
	I1229 07:10:42.449006  184982 pod_ready.go:86] duration metric: took 378.85733ms for pod "kube-controller-manager-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:42.646904  184982 pod_ready.go:83] waiting for pod "kube-proxy-2qrrw" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:43.047153  184982 pod_ready.go:94] pod "kube-proxy-2qrrw" is "Ready"
	I1229 07:10:43.047185  184982 pod_ready.go:86] duration metric: took 400.254537ms for pod "kube-proxy-2qrrw" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:43.247485  184982 pod_ready.go:83] waiting for pod "kube-scheduler-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:43.647106  184982 pod_ready.go:94] pod "kube-scheduler-pause-481637" is "Ready"
	I1229 07:10:43.647139  184982 pod_ready.go:86] duration metric: took 399.627212ms for pod "kube-scheduler-pause-481637" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:10:43.647159  184982 pod_ready.go:40] duration metric: took 1.605291429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:10:43.717979  184982 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:10:43.720096  184982 out.go:179] * Done! kubectl is now configured to use "pause-481637" cluster and "default" namespace by default
	I1229 07:10:43.113813  180179 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:10:43.113835  180179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:10:43.113885  180179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-796549
	I1229 07:10:43.139386  180179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/running-upgrade-796549/id_rsa Username:docker}
	I1229 07:10:43.139681  180179 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:10:43.139707  180179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:10:43.139760  180179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-796549
	I1229 07:10:43.162883  180179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/running-upgrade-796549/id_rsa Username:docker}
	I1229 07:10:43.251540  180179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:10:43.254689  180179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:10:43.288792  180179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:10:44.051379  180179 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:10:44.051435  180179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:10:44.060588  180179 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:10:44.061686  180179 addons.go:530] duration metric: took 972.549609ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:10:44.064523  180179 api_server.go:72] duration metric: took 975.417827ms to wait for apiserver process to appear ...
	I1229 07:10:44.064545  180179 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:10:44.064563  180179 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:10:44.069975  180179 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:10:44.070859  180179 api_server.go:141] control plane version: v1.32.0
	I1229 07:10:44.070882  180179 api_server.go:131] duration metric: took 6.330295ms to wait for apiserver health ...
	I1229 07:10:44.070893  180179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:10:44.074423  180179 system_pods.go:59] 8 kube-system pods found
	I1229 07:10:44.074457  180179 system_pods.go:61] "coredns-668d6bf9bc-8vskg" [27eae4f9-5b4d-41e4-8308-030b9bdec27f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:10:44.074465  180179 system_pods.go:61] "etcd-running-upgrade-796549" [80f62618-d4f6-4401-88bf-60e467bc5eb7] Pending
	I1229 07:10:44.074472  180179 system_pods.go:61] "kindnet-qdnfs" [719ce600-24d5-4321-ad7b-510500e0e8f6] Running
	I1229 07:10:44.074485  180179 system_pods.go:61] "kube-apiserver-running-upgrade-796549" [633129a9-c21d-435b-90ca-406195a1fc32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:10:44.074496  180179 system_pods.go:61] "kube-controller-manager-running-upgrade-796549" [43d10c97-30c5-485e-998a-f2d9216a5356] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:10:44.074546  180179 system_pods.go:61] "kube-proxy-s9r7n" [9bdc6553-5cc8-4d45-89ea-ee6fe1bf128c] Running
	I1229 07:10:44.074558  180179 system_pods.go:61] "kube-scheduler-running-upgrade-796549" [d4c9be11-9d59-437a-9a5c-d8e7e7413f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:10:44.074564  180179 system_pods.go:61] "storage-provisioner" [0991efd5-99d2-4ad5-8dbf-9e535bd584c4] Running
	I1229 07:10:44.074575  180179 system_pods.go:74] duration metric: took 3.675324ms to wait for pod list to return data ...
	I1229 07:10:44.074586  180179 kubeadm.go:587] duration metric: took 985.483817ms to wait for: map[apiserver:true system_pods:true]
	I1229 07:10:44.074603  180179 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:10:44.076959  180179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:10:44.076977  180179 node_conditions.go:123] node cpu capacity is 8
	I1229 07:10:44.076989  180179 node_conditions.go:105] duration metric: took 2.380677ms to run NodePressure ...
	I1229 07:10:44.077004  180179 start.go:242] waiting for startup goroutines ...
	I1229 07:10:44.077019  180179 start.go:247] waiting for cluster config update ...
	I1229 07:10:44.077036  180179 start.go:256] writing updated cluster config ...
	I1229 07:10:44.077346  180179 ssh_runner.go:195] Run: rm -f paused
	I1229 07:10:44.133844  180179 start.go:625] kubectl: 1.35.0, cluster: 1.32.0 (minor skew: 3)
	I1229 07:10:44.138522  180179 out.go:203] 
	W1229 07:10:44.139613  180179 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.32.0.
	I1229 07:10:44.140902  180179 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:10:44.143522  180179 out.go:179] * Done! kubectl is now configured to use "running-upgrade-796549" cluster and "default" namespace by default
	I1229 07:10:44.780329  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:10:44.780398  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.479069056Z" level=info msg="RDT not available in the host system"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.479108602Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480142982Z" level=info msg="Conmon does support the --sync option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480161024Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.48017363Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480924815Z" level=info msg="Conmon does support the --sync option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480947156Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.486378159Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.486397359Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.487067503Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.487532663Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.487591064Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.572714916Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-5zm82 Namespace:kube-system ID:69bf5bb60ce7ac9ab4633fe1abc38d1f5a4243343bd274dda125871ed4235b17 UID:c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0 NetNS:/var/run/netns/b4282dd8-76b0-44da-88e3-d793db68d105 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007e2a0}] Aliases:map[]}"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.572965558Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-5zm82 for CNI network kindnet (type=ptp)"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573544557Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573577287Z" level=info msg="Starting seccomp notifier watcher"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573630033Z" level=info msg="Create NRI interface"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573809605Z" level=info msg="built-in NRI default validator is disabled"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.57382701Z" level=info msg="runtime interface created"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573844414Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573854231Z" level=info msg="runtime interface starting up..."
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573861464Z" level=info msg="starting plugins..."
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573875365Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.574291246Z" level=info msg="No systemd watchdog enabled"
	Dec 29 07:10:40 pause-481637 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	58489a35e11e1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     13 seconds ago      Running             coredns                   0                   69bf5bb60ce7a       coredns-7d764666f9-5zm82               kube-system
	5e860b5c12c78       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   24 seconds ago      Running             kindnet-cni               0                   93ae597140bad       kindnet-x4zst                          kube-system
	571cde0d2860e       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     26 seconds ago      Running             kube-proxy                0                   932caf88e8adb       kube-proxy-2qrrw                       kube-system
	15715c02ea0c8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     37 seconds ago      Running             etcd                      0                   8ab35fe59fc6f       etcd-pause-481637                      kube-system
	cbb7bcc884a54       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     37 seconds ago      Running             kube-apiserver            0                   4baa8fc74b44a       kube-apiserver-pause-481637            kube-system
	5f72b4ac83205       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     37 seconds ago      Running             kube-controller-manager   0                   23dfbbf854fef       kube-controller-manager-pause-481637   kube-system
	febc3eb4806ae       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     37 seconds ago      Running             kube-scheduler            0                   a9608981a41a0       kube-scheduler-pause-481637            kube-system
	
	
	==> coredns [58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37970 - 60113 "HINFO IN 3060421740110709993.3405178809552654186. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026325781s
	
	
	==> describe nodes <==
	Name:               pause-481637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-481637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=pause-481637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_10_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:10:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-481637
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-481637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                c9ae9aeb-5a7b-46f5-a34a-ae1c057846c3
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-5zm82                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-481637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-x4zst                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-481637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-481637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-2qrrw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-481637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node pause-481637 event: Registered Node pause-481637 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2] <==
	{"level":"info","ts":"2025-12-29T07:10:10.231909Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:10:11.122088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:11.122152Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:11.122238Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:11.122333Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:10:11.122388Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.123123Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.123156Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:10:11.123187Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.123205Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.124042Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.124570Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:10:11.124590Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:10:11.124566Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-481637 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:10:11.124937Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.124920Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:10:11.124972Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:10:11.125011Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.125067Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.125103Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:10:11.125199Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:10:11.125831Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:10:11.125996Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:10:11.129997Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:10:11.130643Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:10:47 up 53 min,  0 user,  load average: 4.12, 2.16, 1.46
	Linux pause-481637 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2] <==
	I1229 07:10:23.208640       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:10:23.209052       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:10:23.209195       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:10:23.209241       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:10:23.209271       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:10:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:10:23.506342       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:10:23.506376       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:10:23.506388       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:10:23.507098       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:10:23.712205       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:10:23.712253       1 metrics.go:72] Registering metrics
	I1229 07:10:23.712332       1 controller.go:711] "Syncing nftables rules"
	I1229 07:10:33.506397       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:10:33.506468       1 main.go:301] handling current node
	I1229 07:10:43.512323       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:10:43.512368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb] <==
	I1229 07:10:12.414862       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:12.414879       1 policy_source.go:248] refreshing policies
	E1229 07:10:12.448898       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1229 07:10:12.495988       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:10:12.499669       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:12.500768       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:10:12.505369       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:12.606631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:10:13.300654       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:10:13.305076       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:10:13.305105       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:10:13.805528       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:10:13.849168       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:10:13.903360       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:10:13.909624       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1229 07:10:13.910880       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:10:13.915817       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:10:14.317611       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:10:15.102153       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:10:15.116852       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:10:15.128106       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:10:19.772608       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:19.779668       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:20.273186       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:10:20.347400       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9] <==
	I1229 07:10:19.122878       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122883       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122889       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122892       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122899       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122899       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122907       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122909       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122923       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122930       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122931       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122949       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122958       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122966       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122323       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-481637"
	I1229 07:10:19.125127       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:10:19.125950       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.127118       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.128435       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:19.133562       1 range_allocator.go:433] "Set node PodCIDR" node="pause-481637" podCIDRs=["10.244.0.0/24"]
	I1229 07:10:19.222624       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.222646       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:10:19.222651       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:10:19.228979       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:34.128272       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26] <==
	I1229 07:10:21.426180       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:10:21.489498       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:21.589826       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:21.589869       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:10:21.589950       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:10:21.620780       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:10:21.620839       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:10:21.627409       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:10:21.627864       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:10:21.627889       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:10:21.630585       1 config.go:200] "Starting service config controller"
	I1229 07:10:21.630608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:10:21.630673       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:10:21.630767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:10:21.630922       1 config.go:309] "Starting node config controller"
	I1229 07:10:21.630952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:10:21.630961       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:10:21.631383       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:10:21.632018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:10:21.730978       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:10:21.732284       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:10:21.732411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a] <==
	E1229 07:10:12.366243       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:10:12.366333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:10:12.366421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:10:12.366546       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:10:12.366561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:10:12.366577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:10:12.366718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:10:12.366744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:10:12.366758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:10:12.366831       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:10:13.172140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:10:13.177247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:10:13.203040       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:10:13.228702       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:10:13.297762       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:10:13.303553       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:10:13.338884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1229 07:10:13.372496       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:10:13.376600       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:10:13.428847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:10:13.467174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:10:13.536231       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:10:13.550497       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:10:13.585854       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1229 07:10:16.460305       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:10:22 pause-481637 kubelet[1293]: I1229 07:10:22.547577    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2qrrw" podStartSLOduration=2.547553282 podStartE2EDuration="2.547553282s" podCreationTimestamp="2025-12-29 07:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:10:22.121574919 +0000 UTC m=+7.204269131" watchObservedRunningTime="2025-12-29 07:10:22.547553282 +0000 UTC m=+7.630247496"
	Dec 29 07:10:22 pause-481637 kubelet[1293]: E1229 07:10:22.985613    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-481637" containerName="etcd"
	Dec 29 07:10:23 pause-481637 kubelet[1293]: E1229 07:10:23.047954    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-481637" containerName="kube-scheduler"
	Dec 29 07:10:23 pause-481637 kubelet[1293]: I1229 07:10:23.127526    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-x4zst" podStartSLOduration=1.6128638290000001 podStartE2EDuration="3.127498013s" podCreationTimestamp="2025-12-29 07:10:20 +0000 UTC" firstStartedPulling="2025-12-29 07:10:21.334675663 +0000 UTC m=+6.417369856" lastFinishedPulling="2025-12-29 07:10:22.84930986 +0000 UTC m=+7.932004040" observedRunningTime="2025-12-29 07:10:23.126625988 +0000 UTC m=+8.209320189" watchObservedRunningTime="2025-12-29 07:10:23.127498013 +0000 UTC m=+8.210192214"
	Dec 29 07:10:25 pause-481637 kubelet[1293]: E1229 07:10:25.775504    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-481637" containerName="kube-controller-manager"
	Dec 29 07:10:32 pause-481637 kubelet[1293]: E1229 07:10:32.473090    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-481637" containerName="kube-apiserver"
	Dec 29 07:10:32 pause-481637 kubelet[1293]: E1229 07:10:32.987381    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-481637" containerName="etcd"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: E1229 07:10:33.054727    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-481637" containerName="kube-scheduler"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: I1229 07:10:33.868741    1293 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: I1229 07:10:33.967731    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0-config-volume\") pod \"coredns-7d764666f9-5zm82\" (UID: \"c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0\") " pod="kube-system/coredns-7d764666f9-5zm82"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: I1229 07:10:33.967781    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkjq\" (UniqueName: \"kubernetes.io/projected/c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0-kube-api-access-bhkjq\") pod \"coredns-7d764666f9-5zm82\" (UID: \"c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0\") " pod="kube-system/coredns-7d764666f9-5zm82"
	Dec 29 07:10:35 pause-481637 kubelet[1293]: E1229 07:10:35.136604    1293 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5zm82" containerName="coredns"
	Dec 29 07:10:35 pause-481637 kubelet[1293]: I1229 07:10:35.146410    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5zm82" podStartSLOduration=15.146393815 podStartE2EDuration="15.146393815s" podCreationTimestamp="2025-12-29 07:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:10:35.146323273 +0000 UTC m=+20.229017473" watchObservedRunningTime="2025-12-29 07:10:35.146393815 +0000 UTC m=+20.229088016"
	Dec 29 07:10:36 pause-481637 kubelet[1293]: E1229 07:10:36.138627    1293 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5zm82" containerName="coredns"
	Dec 29 07:10:37 pause-481637 kubelet[1293]: E1229 07:10:37.141020    1293 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5zm82" containerName="coredns"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: W1229 07:10:40.145864    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: E1229 07:10:40.146007    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: E1229 07:10:40.146067    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 29 07:10:40 pause-481637 kubelet[1293]: E1229 07:10:40.146085    1293 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 29 07:10:40 pause-481637 kubelet[1293]: W1229 07:10:40.247160    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: W1229 07:10:40.418513    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 29 07:10:44 pause-481637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:10:44 pause-481637 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:10:44 pause-481637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:10:44 pause-481637 systemd[1]: kubelet.service: Consumed 1.311s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-481637 -n pause-481637
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-481637 -n pause-481637: exit status 2 (365.464033ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-481637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-481637
helpers_test.go:244: (dbg) docker inspect pause-481637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c",
	        "Created": "2025-12-29T07:09:53.106581938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:09:57.286794192Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/hosts",
	        "LogPath": "/var/lib/docker/containers/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c/e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c-json.log",
	        "Name": "/pause-481637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-481637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-481637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e9175fa6027804591652f6f823d94b30b7ab9b566bb08090bd7dc45f48c1743c",
	                "LowerDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42f0982355f7780754931d8f6abfdec9884637d998a3843de0215782e9552341/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-481637",
	                "Source": "/var/lib/docker/volumes/pause-481637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-481637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-481637",
	                "name.minikube.sigs.k8s.io": "pause-481637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd68e0678f8ce9b277bf710cf03b049107f846d8f1694857199e098b9192a0b7",
	            "SandboxKey": "/var/run/docker/netns/bd68e0678f8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-481637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f29fff9b01c97698f6fd6b28b92b63f7937eb8e57f0d4d7f7536ddb8174ceae",
	                    "EndpointID": "c117b2fad07e007922db927d86b250eee424acd7c999c176da773e71ad34e04c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:70:b6:fd:04:bf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-481637",
	                        "e9175fa60278"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-481637 -n pause-481637
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-481637 -n pause-481637: exit status 2 (389.498643ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-481637 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-481637 logs -n 25: (1.060117686s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-891761 --schedule 5m -v=5 --alsologtostderr                                                                │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 5m -v=5 --alsologtostderr                                                                │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --cancel-scheduled                                                                                  │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │ 29 Dec 25 07:08 UTC │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │                     │
	│ stop    │ -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr                                                               │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:08 UTC │ 29 Dec 25 07:08 UTC │
	│ delete  │ -p scheduled-stop-891761                                                                                                     │ scheduled-stop-891761       │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ -p insufficient-storage-899672 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio             │ insufficient-storage-899672 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ -p insufficient-storage-899672                                                                                               │ insufficient-storage-899672 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ -p pause-481637 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                    │ pause-481637                │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p offline-crio-469438 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio            │ offline-crio-469438         │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p stopped-upgrade-518014 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ stopped-upgrade-518014      │ jenkins │ v1.35.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p running-upgrade-796549 --memory=3072 --vm-driver=docker  --container-runtime=crio                                         │ running-upgrade-796549      │ jenkins │ v1.35.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p running-upgrade-796549 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ running-upgrade-796549      │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ stop    │ stopped-upgrade-518014 stop                                                                                                  │ stopped-upgrade-518014      │ jenkins │ v1.35.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p stopped-upgrade-518014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                     │ stopped-upgrade-518014      │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	│ delete  │ -p offline-crio-469438                                                                                                       │ offline-crio-469438         │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p pause-481637 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                             │ pause-481637                │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p test-preload-457393 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio │ test-preload-457393         │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	│ pause   │ -p pause-481637 --alsologtostderr -v=5                                                                                       │ pause-481637                │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	│ delete  │ -p running-upgrade-796549                                                                                                    │ running-upgrade-796549      │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │ 29 Dec 25 07:10 UTC │
	│ start   │ -p force-systemd-env-879774 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                   │ force-systemd-env-879774    │ jenkins │ v1.37.0 │ 29 Dec 25 07:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:10:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:10:47.114804  189940 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:10:47.115125  189940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:10:47.115137  189940 out.go:374] Setting ErrFile to fd 2...
	I1229 07:10:47.115142  189940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:10:47.115342  189940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:10:47.115793  189940 out.go:368] Setting JSON to false
	I1229 07:10:47.116838  189940 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3199,"bootTime":1766989048,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:10:47.116902  189940 start.go:143] virtualization: kvm guest
	I1229 07:10:47.118911  189940 out.go:179] * [force-systemd-env-879774] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:10:47.120171  189940 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:10:47.120174  189940 notify.go:221] Checking for updates...
	I1229 07:10:47.122342  189940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:10:47.124019  189940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:10:47.125299  189940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:10:47.126486  189940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:10:47.127672  189940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1229 07:10:47.129650  189940 config.go:182] Loaded profile config "pause-481637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:47.129804  189940 config.go:182] Loaded profile config "stopped-upgrade-518014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:10:47.129948  189940 config.go:182] Loaded profile config "test-preload-457393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:47.130056  189940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:10:47.159165  189940 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:10:47.159289  189940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:10:47.232288  189940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-29 07:10:47.219062599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:10:47.232421  189940 docker.go:319] overlay module found
	I1229 07:10:47.234094  189940 out.go:179] * Using the docker driver based on user configuration
	I1229 07:10:47.235251  189940 start.go:309] selected driver: docker
	I1229 07:10:47.235267  189940 start.go:928] validating driver "docker" against <nil>
	I1229 07:10:47.235281  189940 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:10:47.236049  189940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:10:47.303387  189940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-29 07:10:47.291912327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:10:47.303588  189940 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:10:47.303838  189940 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:10:47.305733  189940 out.go:179] * Using Docker driver with root privileges
	I1229 07:10:47.306910  189940 cni.go:84] Creating CNI manager for ""
	I1229 07:10:47.306988  189940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:10:47.307002  189940 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:10:47.307085  189940 start.go:353] cluster config:
	{Name:force-systemd-env-879774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-879774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:10:47.308336  189940 out.go:179] * Starting "force-systemd-env-879774" primary control-plane node in "force-systemd-env-879774" cluster
	I1229 07:10:47.309478  189940 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:10:47.310716  189940 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:10:47.311803  189940 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:10:47.311840  189940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:10:47.311853  189940 cache.go:65] Caching tarball of preloaded images
	I1229 07:10:47.311889  189940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:10:47.312000  189940 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:10:47.312020  189940 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:10:47.312137  189940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/force-systemd-env-879774/config.json ...
	I1229 07:10:47.312168  189940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/force-systemd-env-879774/config.json: {Name:mkada5c1eba895dc3b1651101c7071bf9c51bbcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:47.335837  189940 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:10:47.335857  189940 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:10:47.335883  189940 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:10:47.335922  189940 start.go:360] acquireMachinesLock for force-systemd-env-879774: {Name:mk241f78e48ee406ac31de0618173f70aad1a08a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:10:47.336028  189940 start.go:364] duration metric: took 85.287µs to acquireMachinesLock for "force-systemd-env-879774"
	I1229 07:10:47.336056  189940 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-879774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-879774 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:10:47.336136  189940 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:10:43.608587  185656 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1229 07:10:43.608677  185656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1229 07:10:43.652606  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:43.898083  185656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:44.016034  185656 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1229 07:10:44.016083  185656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:10:44.016122  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:10:44.016130  185656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:10:44.016172  185656 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1229 07:10:44.016228  185656 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:44.016273  185656 ssh_runner.go:195] Run: which crictl
	I1229 07:10:44.049387  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0
	I1229 07:10:44.049483  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:10:45.179526  185656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.163373249s)
	I1229 07:10:45.179557  185656 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1229 07:10:45.179561  185656 ssh_runner.go:235] Completed: which crictl: (1.16326885s)
	I1229 07:10:45.179582  185656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:10:45.179602  185656 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.130102038s)
	I1229 07:10:45.179630  185656 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1229 07:10:45.179635  185656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:10:45.179640  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:45.179650  185656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (27696640 bytes)
	I1229 07:10:46.632637  185656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.452973541s)
	I1229 07:10:46.632665  185656 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1229 07:10:46.632689  185656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:10:46.632688  185656 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.453025597s)
	I1229 07:10:46.632741  185656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:10:46.632747  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:46.662054  185656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:48.236185  185656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.60342181s)
	I1229 07:10:48.236214  185656 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1229 07:10:48.236279  185656 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.574191499s)
	I1229 07:10:48.236325  185656 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1229 07:10:48.236288  185656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:10:48.236425  185656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:10:48.236457  185656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	
	
	==> CRI-O <==
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.479069056Z" level=info msg="RDT not available in the host system"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.479108602Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480142982Z" level=info msg="Conmon does support the --sync option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480161024Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.48017363Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480924815Z" level=info msg="Conmon does support the --sync option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.480947156Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.486378159Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.486397359Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.487067503Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"en
forcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [cri
o.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.487532663Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.487591064Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.572714916Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-5zm82 Namespace:kube-system ID:69bf5bb60ce7ac9ab4633fe1abc38d1f5a4243343bd274dda125871ed4235b17 UID:c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0 NetNS:/var/run/netns/b4282dd8-76b0-44da-88e3-d793db68d105 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007e2a0}] Aliases:map[]}"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.572965558Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-5zm82 for CNI network kindnet (type=ptp)"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573544557Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573577287Z" level=info msg="Starting seccomp notifier watcher"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573630033Z" level=info msg="Create NRI interface"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573809605Z" level=info msg="built-in NRI default validator is disabled"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.57382701Z" level=info msg="runtime interface created"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573844414Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573854231Z" level=info msg="runtime interface starting up..."
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573861464Z" level=info msg="starting plugins..."
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.573875365Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 29 07:10:40 pause-481637 crio[2203]: time="2025-12-29T07:10:40.574291246Z" level=info msg="No systemd watchdog enabled"
	Dec 29 07:10:40 pause-481637 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	58489a35e11e1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     15 seconds ago      Running             coredns                   0                   69bf5bb60ce7a       coredns-7d764666f9-5zm82               kube-system
	5e860b5c12c78       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   26 seconds ago      Running             kindnet-cni               0                   93ae597140bad       kindnet-x4zst                          kube-system
	571cde0d2860e       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     28 seconds ago      Running             kube-proxy                0                   932caf88e8adb       kube-proxy-2qrrw                       kube-system
	15715c02ea0c8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     39 seconds ago      Running             etcd                      0                   8ab35fe59fc6f       etcd-pause-481637                      kube-system
	cbb7bcc884a54       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     39 seconds ago      Running             kube-apiserver            0                   4baa8fc74b44a       kube-apiserver-pause-481637            kube-system
	5f72b4ac83205       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     39 seconds ago      Running             kube-controller-manager   0                   23dfbbf854fef       kube-controller-manager-pause-481637   kube-system
	febc3eb4806ae       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     39 seconds ago      Running             kube-scheduler            0                   a9608981a41a0       kube-scheduler-pause-481637            kube-system
	
	
	==> coredns [58489a35e11e19b18e157e91660aa22fd062abb979b1048c95b030e927e43506] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37970 - 60113 "HINFO IN 3060421740110709993.3405178809552654186. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026325781s
	
	
	==> describe nodes <==
	Name:               pause-481637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-481637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=pause-481637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_10_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:10:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-481637
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:10:35 +0000   Mon, 29 Dec 2025 07:10:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-481637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                c9ae9aeb-5a7b-46f5-a34a-ae1c057846c3
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-5zm82                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-481637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-x4zst                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-481637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-481637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-2qrrw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-481637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node pause-481637 event: Registered Node pause-481637 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [15715c02ea0c8855cc0d760b1a4d47caf7aba7242bd633dedbf18b227f270ce2] <==
	{"level":"info","ts":"2025-12-29T07:10:10.231909Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:10:11.122088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:11.122152Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:11.122238Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:11.122333Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:10:11.122388Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.123123Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.123156Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:10:11.123187Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.123205Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:11.124042Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.124570Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:10:11.124590Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:10:11.124566Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-481637 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:10:11.124937Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.124920Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:10:11.124972Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:10:11.125011Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.125067Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:11.125103Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:10:11.125199Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:10:11.125831Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:10:11.125996Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:10:11.129997Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:10:11.130643Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:10:49 up 53 min,  0 user,  load average: 4.19, 2.20, 1.48
	Linux pause-481637 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e860b5c12c78561beb41ccbc2085158a45919cc8ed9cb9abc4373d630fd84d2] <==
	I1229 07:10:23.208640       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:10:23.209052       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:10:23.209195       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:10:23.209241       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:10:23.209271       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:10:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:10:23.506342       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:10:23.506376       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:10:23.506388       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:10:23.507098       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:10:23.712205       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:10:23.712253       1 metrics.go:72] Registering metrics
	I1229 07:10:23.712332       1 controller.go:711] "Syncing nftables rules"
	I1229 07:10:33.506397       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:10:33.506468       1 main.go:301] handling current node
	I1229 07:10:43.512323       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:10:43.512368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cbb7bcc884a54e8411d72312a0ada9c61edf897db2ff6699aa4e8a312e9735eb] <==
	I1229 07:10:12.414862       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:12.414879       1 policy_source.go:248] refreshing policies
	E1229 07:10:12.448898       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1229 07:10:12.495988       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:10:12.499669       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:12.500768       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:10:12.505369       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:12.606631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:10:13.300654       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:10:13.305076       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:10:13.305105       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:10:13.805528       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:10:13.849168       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:10:13.903360       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:10:13.909624       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1229 07:10:13.910880       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:10:13.915817       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:10:14.317611       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:10:15.102153       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:10:15.116852       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:10:15.128106       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:10:19.772608       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:19.779668       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:10:20.273186       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:10:20.347400       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5f72b4ac83205cc93dc25e7f93237b1feb1132c848b45fa12af6205fda6bffd9] <==
	I1229 07:10:19.122878       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122883       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122889       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122892       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122899       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122899       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122907       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122909       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122923       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122930       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122931       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122949       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122958       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122966       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.122323       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-481637"
	I1229 07:10:19.125127       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:10:19.125950       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.127118       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.128435       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:19.133562       1 range_allocator.go:433] "Set node PodCIDR" node="pause-481637" podCIDRs=["10.244.0.0/24"]
	I1229 07:10:19.222624       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:19.222646       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:10:19.222651       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:10:19.228979       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:34.128272       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [571cde0d2860e745d81cd271b5ab627d3d57c83abf1fdb3eb1c516a2e37d9f26] <==
	I1229 07:10:21.426180       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:10:21.489498       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:21.589826       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:21.589869       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:10:21.589950       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:10:21.620780       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:10:21.620839       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:10:21.627409       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:10:21.627864       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:10:21.627889       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:10:21.630585       1 config.go:200] "Starting service config controller"
	I1229 07:10:21.630608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:10:21.630673       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:10:21.630767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:10:21.630922       1 config.go:309] "Starting node config controller"
	I1229 07:10:21.630952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:10:21.630961       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:10:21.631383       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:10:21.632018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:10:21.730978       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:10:21.732284       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:10:21.732411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [febc3eb4806ae3ff3fe4ec5ade5c51f1b25306c094df505f7f1a60820d643d9a] <==
	E1229 07:10:12.366243       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:10:12.366333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:10:12.366421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:10:12.366546       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:10:12.366561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:10:12.366577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:10:12.366718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:10:12.366744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:10:12.366758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:10:12.366831       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:10:13.172140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:10:13.177247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:10:13.203040       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:10:13.228702       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:10:13.297762       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:10:13.303553       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:10:13.338884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1229 07:10:13.372496       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:10:13.376600       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:10:13.428847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:10:13.467174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:10:13.536231       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:10:13.550497       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:10:13.585854       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1229 07:10:16.460305       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:10:22 pause-481637 kubelet[1293]: I1229 07:10:22.547577    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2qrrw" podStartSLOduration=2.547553282 podStartE2EDuration="2.547553282s" podCreationTimestamp="2025-12-29 07:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:10:22.121574919 +0000 UTC m=+7.204269131" watchObservedRunningTime="2025-12-29 07:10:22.547553282 +0000 UTC m=+7.630247496"
	Dec 29 07:10:22 pause-481637 kubelet[1293]: E1229 07:10:22.985613    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-481637" containerName="etcd"
	Dec 29 07:10:23 pause-481637 kubelet[1293]: E1229 07:10:23.047954    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-481637" containerName="kube-scheduler"
	Dec 29 07:10:23 pause-481637 kubelet[1293]: I1229 07:10:23.127526    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-x4zst" podStartSLOduration=1.6128638290000001 podStartE2EDuration="3.127498013s" podCreationTimestamp="2025-12-29 07:10:20 +0000 UTC" firstStartedPulling="2025-12-29 07:10:21.334675663 +0000 UTC m=+6.417369856" lastFinishedPulling="2025-12-29 07:10:22.84930986 +0000 UTC m=+7.932004040" observedRunningTime="2025-12-29 07:10:23.126625988 +0000 UTC m=+8.209320189" watchObservedRunningTime="2025-12-29 07:10:23.127498013 +0000 UTC m=+8.210192214"
	Dec 29 07:10:25 pause-481637 kubelet[1293]: E1229 07:10:25.775504    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-481637" containerName="kube-controller-manager"
	Dec 29 07:10:32 pause-481637 kubelet[1293]: E1229 07:10:32.473090    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-481637" containerName="kube-apiserver"
	Dec 29 07:10:32 pause-481637 kubelet[1293]: E1229 07:10:32.987381    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-481637" containerName="etcd"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: E1229 07:10:33.054727    1293 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-481637" containerName="kube-scheduler"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: I1229 07:10:33.868741    1293 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: I1229 07:10:33.967731    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0-config-volume\") pod \"coredns-7d764666f9-5zm82\" (UID: \"c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0\") " pod="kube-system/coredns-7d764666f9-5zm82"
	Dec 29 07:10:33 pause-481637 kubelet[1293]: I1229 07:10:33.967781    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkjq\" (UniqueName: \"kubernetes.io/projected/c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0-kube-api-access-bhkjq\") pod \"coredns-7d764666f9-5zm82\" (UID: \"c1d30c2b-41ab-4a8b-a50a-fc2a71b80ae0\") " pod="kube-system/coredns-7d764666f9-5zm82"
	Dec 29 07:10:35 pause-481637 kubelet[1293]: E1229 07:10:35.136604    1293 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5zm82" containerName="coredns"
	Dec 29 07:10:35 pause-481637 kubelet[1293]: I1229 07:10:35.146410    1293 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5zm82" podStartSLOduration=15.146393815 podStartE2EDuration="15.146393815s" podCreationTimestamp="2025-12-29 07:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:10:35.146323273 +0000 UTC m=+20.229017473" watchObservedRunningTime="2025-12-29 07:10:35.146393815 +0000 UTC m=+20.229088016"
	Dec 29 07:10:36 pause-481637 kubelet[1293]: E1229 07:10:36.138627    1293 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5zm82" containerName="coredns"
	Dec 29 07:10:37 pause-481637 kubelet[1293]: E1229 07:10:37.141020    1293 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5zm82" containerName="coredns"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: W1229 07:10:40.145864    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: E1229 07:10:40.146007    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: E1229 07:10:40.146067    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 29 07:10:40 pause-481637 kubelet[1293]: E1229 07:10:40.146085    1293 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 29 07:10:40 pause-481637 kubelet[1293]: W1229 07:10:40.247160    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 29 07:10:40 pause-481637 kubelet[1293]: W1229 07:10:40.418513    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 29 07:10:44 pause-481637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:10:44 pause-481637 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:10:44 pause-481637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:10:44 pause-481637 systemd[1]: kubelet.service: Consumed 1.311s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-481637 -n pause-481637
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-481637 -n pause-481637: exit status 2 (415.711369ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-481637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.85s)
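For local triage, a minimal manual re-check of this pause failure might look like the following (a sketch only, not part of the test run; it assumes the pause-481637 profile still exists and that crictl is available inside the node, and it mirrors the status probe above and the CRI-O socket errors in the kubelet log):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-481637 -n pause-481637
	out/minikube-linux-amd64 -p pause-481637 ssh -- sudo systemctl is-active crio
	out/minikube-linux-amd64 -p pause-481637 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

If crio is intentionally stopped by the pause, the last two commands are expected to fail in the same way the kubelet dial errors do.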

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (250.452799ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:14:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
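The MK_ADDON_ENABLE_PAUSED error above is minikube's paused-state probe failing because "sudo runc list -f json" cannot open /run/runc. A hedged way to reproduce that probe by hand, assuming the old-k8s-version-876718 node is still reachable via minikube ssh:

	out/minikube-linux-amd64 -p old-k8s-version-876718 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p old-k8s-version-876718 ssh -- ls -la /run/runc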
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-876718 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-876718 describe deploy/metrics-server -n kube-system: exit status 1 (61.186218ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-876718 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
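Had the metrics-server deployment been created, the image assertion at start_stop_delete_test.go:219 corresponds to a check along these lines (a sketch; in this run the deployment is absent, so it returns the NotFound error shown above):

	kubectl --context old-k8s-version-876718 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects that image string to contain "fake.domain/registry.k8s.io/echoserver:1.4".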
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-876718
helpers_test.go:244: (dbg) docker inspect old-k8s-version-876718:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d",
	        "Created": "2025-12-29T07:13:19.142529229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233769,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:13:19.177990115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/hostname",
	        "HostsPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/hosts",
	        "LogPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d-json.log",
	        "Name": "/old-k8s-version-876718",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-876718:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-876718",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d",
	                "LowerDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/merged",
	                "UpperDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/diff",
	                "WorkDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-876718",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-876718/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-876718",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-876718",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-876718",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d02ea4719f44a6e48639d156c636164d47df9ec3ff638b2647159b8d8dbed41f",
	            "SandboxKey": "/var/run/docker/netns/d02ea4719f44",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-876718": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6961f21bb90e6befcbf5f75f7b239c49f9b8e14ab6e6619030de29754825fc86",
	                    "EndpointID": "f30cec13031b18c308576cd7c8b0c88ac5ab7f852cd224ac84aa0fc459ea3263",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fa:ba:66:d4:2a:be",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-876718",
	                        "707d2d5cd5ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-876718 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-876718 logs -n 25: (1.048058936s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p test-preload-457393 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p NoKubernetes-868221                                                                                                                                                                                                                        │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │ 29 Dec 25 07:11 UTC │
	│ start   │ -p NoKubernetes-868221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │ 29 Dec 25 07:11 UTC │
	│ ssh     │ -p NoKubernetes-868221 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │                     │
	│ start   │ -p missing-upgrade-967138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-967138    │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │ 29 Dec 25 07:12 UTC │
	│ stop    │ -p NoKubernetes-868221                                                                                                                                                                                                                        │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │ 29 Dec 25 07:11 UTC │
	│ start   │ -p NoKubernetes-868221 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │ 29 Dec 25 07:12 UTC │
	│ ssh     │ -p NoKubernetes-868221 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ delete  │ -p NoKubernetes-868221                                                                                                                                                                                                                        │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ image   │ test-preload-457393 image list                                                                                                                                                                                                                │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p test-preload-457393                                                                                                                                                                                                                        │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-452455    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p missing-upgrade-967138                                                                                                                                                                                                                     │ missing-upgrade-967138    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p force-systemd-flag-074338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ stop    │ -p kubernetes-upgrade-174577 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ ssh     │ force-systemd-flag-074338 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p force-systemd-flag-074338                                                                                                                                                                                                                  │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ cert-options-001954 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ -p cert-options-001954 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ delete  │ -p cert-options-001954                                                                                                                                                                                                                        │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:13:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:13:12.742690  232412 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:13:12.742800  232412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:13:12.742812  232412 out.go:374] Setting ErrFile to fd 2...
	I1229 07:13:12.742818  232412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:13:12.743030  232412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:13:12.743642  232412 out.go:368] Setting JSON to false
	I1229 07:13:12.745256  232412 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3345,"bootTime":1766989048,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:13:12.745325  232412 start.go:143] virtualization: kvm guest
	I1229 07:13:12.748210  232412 out.go:179] * [old-k8s-version-876718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:13:12.749568  232412 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:13:12.749577  232412 notify.go:221] Checking for updates...
	I1229 07:13:12.751995  232412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:13:12.753306  232412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:13:12.754657  232412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:13:12.755840  232412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:13:12.757107  232412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:13:12.759040  232412 config.go:182] Loaded profile config "cert-expiration-452455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:13:12.759191  232412 config.go:182] Loaded profile config "kubernetes-upgrade-174577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:13:12.759355  232412 config.go:182] Loaded profile config "stopped-upgrade-518014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:13:12.759488  232412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:13:12.787987  232412 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:13:12.788166  232412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:13:12.852125  232412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:13:12.842368168 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:13:12.852275  232412 docker.go:319] overlay module found
	I1229 07:13:12.857209  232412 out.go:179] * Using the docker driver based on user configuration
	I1229 07:13:12.858601  232412 start.go:309] selected driver: docker
	I1229 07:13:12.858615  232412 start.go:928] validating driver "docker" against <nil>
	I1229 07:13:12.858625  232412 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:13:12.859215  232412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:13:12.918106  232412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:13:12.907329835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:13:12.918295  232412 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:13:12.918482  232412 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:13:12.920503  232412 out.go:179] * Using Docker driver with root privileges
	I1229 07:13:12.921544  232412 cni.go:84] Creating CNI manager for ""
	I1229 07:13:12.921617  232412 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:13:12.921631  232412 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:13:12.921697  232412 start.go:353] cluster config:
	{Name:old-k8s-version-876718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-876718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:13:12.923063  232412 out.go:179] * Starting "old-k8s-version-876718" primary control-plane node in "old-k8s-version-876718" cluster
	I1229 07:13:12.924133  232412 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:13:12.925124  232412 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:13:12.926005  232412 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:13:12.926051  232412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:13:12.926059  232412 cache.go:65] Caching tarball of preloaded images
	I1229 07:13:12.926102  232412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:13:12.926138  232412 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:13:12.926151  232412 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1229 07:13:12.926288  232412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/config.json ...
	I1229 07:13:12.926312  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/config.json: {Name:mkcb84097b34770ede5612c322a70c636404e8e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:12.951385  232412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:13:12.951406  232412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:13:12.951422  232412 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:13:12.951458  232412 start.go:360] acquireMachinesLock for old-k8s-version-876718: {Name:mk3c66f3c3a9fc489b28ca83a4830eec615e2b15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:13:12.951558  232412 start.go:364] duration metric: took 78.768µs to acquireMachinesLock for "old-k8s-version-876718"
	I1229 07:13:12.951588  232412 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-876718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-876718 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:13:12.951670  232412 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:13:09.399554  225445 ssh_runner.go:235] Completed: sudo podman load -i /tmp/tmp.LQXLwoNemP/bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95.tar: (1.888192136s)
	I1229 07:13:09.399586  225445 crio.go:275] Loading image: /tmp/tmp.LQXLwoNemP/4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62.tar
	I1229 07:13:09.399640  225445 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.LQXLwoNemP/4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62.tar
	I1229 07:13:11.146078  225445 ssh_runner.go:235] Completed: sudo podman load -i /tmp/tmp.LQXLwoNemP/4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62.tar: (1.746410946s)
	I1229 07:13:11.146111  225445 crio.go:275] Loading image: /tmp/tmp.LQXLwoNemP/ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a.tar
	I1229 07:13:11.146154  225445 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.LQXLwoNemP/ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a.tar
	I1229 07:13:12.176851  225445 ssh_runner.go:235] Completed: sudo podman load -i /tmp/tmp.LQXLwoNemP/ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a.tar: (1.030678066s)
	I1229 07:13:12.176883  225445 crio.go:275] Loading image: /tmp/tmp.LQXLwoNemP/f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157.tar
	I1229 07:13:12.176920  225445 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.LQXLwoNemP/f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157.tar
	I1229 07:13:12.860085  225445 crio.go:275] Loading image: /tmp/tmp.LQXLwoNemP/e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c.tar
	I1229 07:13:12.860148  225445 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.LQXLwoNemP/e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c.tar
	I1229 07:13:12.977828  225445 ssh_runner.go:195] Run: rm -rf /tmp/tmp.LQXLwoNemP
	I1229 07:13:13.069777  225445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:13:13.111011  225445 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:13:13.111031  225445 cache_images.go:86] Images are preloaded, skipping loading
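The "all images are preloaded" decision above is based on the `sudo crictl images --output json` call. A minimal sketch (not test output) of reproducing that check by hand on the node, using the standard crictl JSON shape:

    # List image tags known to CRI-O; minikube compares this set against the expected preload list
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort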
	I1229 07:13:13.111039  225445 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1229 07:13:13.111135  225445 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-174577 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-174577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
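The kubelet unit override shown above is written to the node a few lines further down as the 375-byte 10-kubeadm.conf drop-in. A short sketch (commands only, not from the run) to confirm what systemd actually picked up:

    # Show the effective kubelet unit including drop-ins, then the drop-in file itself
    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf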
	I1229 07:13:13.111201  225445 ssh_runner.go:195] Run: crio config
	I1229 07:13:13.162455  225445 cni.go:84] Creating CNI manager for ""
	I1229 07:13:13.162477  225445 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:13:13.162491  225445 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:13:13.162511  225445 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-174577 NodeName:kubernetes-upgrade-174577 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:13:13.162640  225445 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-174577"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:13:13.162694  225445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:13:13.170759  225445 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:13:13.170830  225445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:13:13.178356  225445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1229 07:13:13.190678  225445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:13:13.202960  225445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
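The generated kubeadm config above is staged on the node as kubeadm.yaml.new and compared against the active kubeadm.yaml further down (the drift diff). As a sketch, the staged file can also be sanity-checked directly; `kubeadm config validate` is an assumption that holds for recent kubeadm releases, not something the test itself runs:

    # Compare staged vs. active config (minikube runs the same diff before reconfiguring)
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # Validate the staged file with the matching kubeadm binary
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new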
	I1229 07:13:13.215159  225445 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:13:13.218921  225445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
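The one-liner above rewrites /etc/hosts so exactly one control-plane.minikube.internal entry remains. A roughly equivalent, more readable sketch of the same effect (not the command minikube runs):

    # Drop any stale entry, then append the authoritative mapping
    sudo sed -i '/\tcontrol-plane\.minikube\.internal$/d' /etc/hosts
    printf '192.168.76.2\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts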
	I1229 07:13:13.264061  225445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:13:13.372966  225445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:13:13.399952  225445 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577 for IP: 192.168.76.2
	I1229 07:13:13.399981  225445 certs.go:195] generating shared ca certs ...
	I1229 07:13:13.400007  225445 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:13.400175  225445 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:13:13.400270  225445 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:13:13.400291  225445 certs.go:257] generating profile certs ...
	I1229 07:13:13.400411  225445 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/client.key
	I1229 07:13:13.400488  225445 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/apiserver.key.fc0c06a7
	I1229 07:13:13.400556  225445 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/proxy-client.key
	I1229 07:13:13.400713  225445 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:13:13.400763  225445 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:13:13.400780  225445 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:13:13.400822  225445 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:13:13.400870  225445 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:13:13.400920  225445 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:13:13.400990  225445 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:13:13.401874  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:13:13.422563  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:13:13.442505  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:13:13.463705  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:13:13.488095  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:13:13.520164  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:13:13.540886  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:13:13.561752  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:13:13.580721  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:13:13.600833  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:13:13.624410  225445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:13:13.656924  225445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:13:13.675090  225445 ssh_runner.go:195] Run: openssl version
	I1229 07:13:13.682166  225445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:13.690436  225445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:13:13.700582  225445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:13.705508  225445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:13.705572  225445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:13.752242  225445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:13:13.761358  225445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:13:13.771300  225445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:13:13.779870  225445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:13:13.784464  225445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:13:13.784521  225445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:13:13.827002  225445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:13:13.834914  225445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:13:13.844269  225445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:13:13.853829  225445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:13:13.858293  225445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:13:13.858355  225445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:13:13.899850  225445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
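The ln/openssl/test sequences above install each CA under its OpenSSL subject-hash name so TLS clients can resolve it from /etc/ssl/certs; hashes like b5213941 and 3ec20f2e come straight from `openssl x509 -hash`. A minimal sketch of the same idea for one certificate:

    # Compute the subject hash and create the <hash>.0 symlink OpenSSL looks up
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    sudo test -L "/etc/ssl/certs/${hash}.0" && echo "installed as ${hash}.0"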
	I1229 07:13:13.908235  225445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:13:13.912431  225445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:13:13.970085  225445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:13:14.034671  225445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:13:14.098781  225445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:13:14.159627  225445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
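The `-checkend 86400` calls above ask whether each control-plane certificate expires within 24 hours; a non-zero exit means it does. A compact sketch over the same files (the name list is taken from the log, not exhaustive):

    for c in apiserver apiserver-kubelet-client apiserver-etcd-client etcd/server etcd/peer etcd/healthcheck-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "expires within 24h: ${c}"
    done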
	I1229 07:13:13.435068  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:13.435583  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:13.435648  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:13.435709  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:13.477380  182389 cri.go:96] found id: "a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:13.477407  182389 cri.go:96] found id: ""
	I1229 07:13:13.477416  182389 logs.go:282] 1 containers: [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03]
	I1229 07:13:13.477468  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:13.485877  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:13.485949  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:13.532284  182389 cri.go:96] found id: ""
	I1229 07:13:13.532313  182389 logs.go:282] 0 containers: []
	W1229 07:13:13.532323  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:13.532329  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:13.532378  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:13.576132  182389 cri.go:96] found id: ""
	I1229 07:13:13.576160  182389 logs.go:282] 0 containers: []
	W1229 07:13:13.576173  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:13.576181  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:13.576258  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:13.613994  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:13.614020  182389 cri.go:96] found id: ""
	I1229 07:13:13.614029  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:13.614076  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:13.618645  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:13.618715  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:13.669175  182389 cri.go:96] found id: ""
	I1229 07:13:13.669212  182389 logs.go:282] 0 containers: []
	W1229 07:13:13.669249  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:13.669258  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:13.669368  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:13.712781  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:13.712806  182389 cri.go:96] found id: ""
	I1229 07:13:13.712816  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:13.712880  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:13.716864  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:13.716931  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:13.769658  182389 cri.go:96] found id: ""
	I1229 07:13:13.769685  182389 logs.go:282] 0 containers: []
	W1229 07:13:13.769697  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:13.769705  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:13.769774  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:13.808787  182389 cri.go:96] found id: ""
	I1229 07:13:13.808812  182389 logs.go:282] 0 containers: []
	W1229 07:13:13.808822  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:13.808833  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:13.808849  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:13.824351  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:13.824376  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:13.889431  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:13.889457  182389 logs.go:123] Gathering logs for kube-apiserver [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03] ...
	I1229 07:13:13.889470  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:13.941055  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:13.941089  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:14.053616  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:14.053664  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:14.108555  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:14.108590  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:14.178249  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:14.178286  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:14.234247  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:14.234278  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
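When the apiserver healthz probe is refused, minikube falls back to collecting the diagnostics gathered above. The same bundle can be pulled manually on the node (a sketch; <container-id> is a placeholder for an ID printed by crictl):

    sudo journalctl -u kubelet -n 400                            # kubelet logs
    sudo journalctl -u crio -n 400                               # CRI-O logs
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
    sudo crictl ps -a                                            # container status
    sudo crictl logs --tail 400 <container-id>                   # e.g. the kube-apiserver ID found above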
	I1229 07:13:12.953342  232412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:13:12.953576  232412 start.go:159] libmachine.API.Create for "old-k8s-version-876718" (driver="docker")
	I1229 07:13:12.953609  232412 client.go:173] LocalClient.Create starting
	I1229 07:13:12.953680  232412 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:13:12.953739  232412 main.go:144] libmachine: Decoding PEM data...
	I1229 07:13:12.953770  232412 main.go:144] libmachine: Parsing certificate...
	I1229 07:13:12.953834  232412 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:13:12.953870  232412 main.go:144] libmachine: Decoding PEM data...
	I1229 07:13:12.953889  232412 main.go:144] libmachine: Parsing certificate...
	I1229 07:13:12.954351  232412 cli_runner.go:164] Run: docker network inspect old-k8s-version-876718 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:13:12.972765  232412 cli_runner.go:211] docker network inspect old-k8s-version-876718 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:13:12.972835  232412 network_create.go:284] running [docker network inspect old-k8s-version-876718] to gather additional debugging logs...
	I1229 07:13:12.972854  232412 cli_runner.go:164] Run: docker network inspect old-k8s-version-876718
	W1229 07:13:12.990645  232412 cli_runner.go:211] docker network inspect old-k8s-version-876718 returned with exit code 1
	I1229 07:13:12.990672  232412 network_create.go:287] error running [docker network inspect old-k8s-version-876718]: docker network inspect old-k8s-version-876718: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-876718 not found
	I1229 07:13:12.990687  232412 network_create.go:289] output of [docker network inspect old-k8s-version-876718]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-876718 not found
	
	** /stderr **
	I1229 07:13:12.990817  232412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:13:13.008750  232412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:13:13.009405  232412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:13:13.010024  232412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:13:13.010464  232412 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66e171323e2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:d9:01:28:19:dc} reservation:<nil>}
	I1229 07:13:13.010942  232412 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-faaa954500ab IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8a:1a:a6:08:26} reservation:<nil>}
	I1229 07:13:13.011595  232412 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-fd4c68e5caac IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:8a:8b:f7:ad:65:de} reservation:<nil>}
	I1229 07:13:13.012384  232412 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec7110}
	I1229 07:13:13.012409  232412 network_create.go:124] attempt to create docker network old-k8s-version-876718 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1229 07:13:13.012455  232412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-876718 old-k8s-version-876718
	I1229 07:13:13.061840  232412 network_create.go:108] docker network old-k8s-version-876718 192.168.103.0/24 created
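Subnets 192.168.49.0/24 through 192.168.94.0/24 are already held by other profiles on this host, so the first free /24 probed is 192.168.103.0/24. A trimmed sketch of the equivalent manual steps (the full flag set minikube passes is shown in the Run: line above):

    # Skip creation if the network already exists, otherwise carve out the chosen /24
    docker network inspect old-k8s-version-876718 >/dev/null 2>&1 || \
      docker network create --driver=bridge \
        --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
        -o com.docker.network.driver.mtu=1500 \
        old-k8s-version-876718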
	I1229 07:13:13.061870  232412 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-876718" container
	I1229 07:13:13.061929  232412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:13:13.081646  232412 cli_runner.go:164] Run: docker volume create old-k8s-version-876718 --label name.minikube.sigs.k8s.io=old-k8s-version-876718 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:13:13.103239  232412 oci.go:103] Successfully created a docker volume old-k8s-version-876718
	I1229 07:13:13.103313  232412 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-876718-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-876718 --entrypoint /usr/bin/test -v old-k8s-version-876718:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:13:13.785829  232412 oci.go:107] Successfully prepared a docker volume old-k8s-version-876718
	I1229 07:13:13.785904  232412 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:13:13.785921  232412 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:13:13.785973  232412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-876718:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
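The preload tarball is an lz4-compressed tar of the CRI-O image store for v1.28.0, extracted into the profile's Docker volume via the kicbase image as shown above. A sketch for peeking at its contents on the Jenkins host without extracting (assumes lz4 is installed):

    # List the first entries of the preload archive
    tar -I lz4 -tf /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 | head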
	I1229 07:13:14.218368  225445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:13:14.265346  225445 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-174577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-174577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:13:14.265449  225445 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:13:14.265530  225445 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:13:14.303136  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:13:14.303165  225445 cri.go:96] found id: "6b460bd082cb45e96cdc53abd5ace251fc95bd9957abc99c11caa89672b33281"
	I1229 07:13:14.303171  225445 cri.go:96] found id: "3983ff0eb796732a81cf41918505f3d709d8ac89d82fbe39cc0486e541f72c4e"
	I1229 07:13:14.303176  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:13:14.303179  225445 cri.go:96] found id: ""
	I1229 07:13:14.303267  225445 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:13:14.318208  225445 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:14Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:13:14.318309  225445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:13:14.329014  225445 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:13:14.329033  225445 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:13:14.329080  225445 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:13:14.339176  225445 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:13:14.339942  225445 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-174577" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:13:14.340472  225445 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-174577" cluster setting kubeconfig missing "kubernetes-upgrade-174577" context setting]
	I1229 07:13:14.341293  225445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:14.342191  225445 kapi.go:59] client config for kubernetes-upgrade-174577: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:13:14.342747  225445 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:13:14.342765  225445 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:13:14.342771  225445 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:13:14.342784  225445 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:13:14.342790  225445 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:13:14.342796  225445 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:13:14.343167  225445 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:13:14.356859  225445 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-29 07:12:17.170317726 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-29 07:13:13.213237462 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-174577"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
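The drift above boils down to the kubeadm API bump from v1beta3 to v1beta4, where extraArgs changed from a string map to a list of name/value pairs, plus the kubernetesVersion bump from v1.28.0 to v1.35.0. Outside of minikube, the same API conversion can be done with kubeadm itself (a sketch; /tmp/kubeadm-v1beta4.yaml is an arbitrary output path, not something the test writes):

    # Convert an old-API config to the version the v1.35.0 binary expects
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /tmp/kubeadm-v1beta4.yaml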
	I1229 07:13:14.356877  225445 kubeadm.go:1161] stopping kube-system containers ...
	I1229 07:13:14.356890  225445 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1229 07:13:14.356957  225445 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:13:14.396639  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:13:14.396662  225445 cri.go:96] found id: "6b460bd082cb45e96cdc53abd5ace251fc95bd9957abc99c11caa89672b33281"
	I1229 07:13:14.396668  225445 cri.go:96] found id: "3983ff0eb796732a81cf41918505f3d709d8ac89d82fbe39cc0486e541f72c4e"
	I1229 07:13:14.396673  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:13:14.396679  225445 cri.go:96] found id: ""
	I1229 07:13:14.396685  225445 cri.go:274] Stopping containers: [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1 6b460bd082cb45e96cdc53abd5ace251fc95bd9957abc99c11caa89672b33281 3983ff0eb796732a81cf41918505f3d709d8ac89d82fbe39cc0486e541f72c4e 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:13:14.396737  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:13:14.401169  225445 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1 6b460bd082cb45e96cdc53abd5ace251fc95bd9957abc99c11caa89672b33281 3983ff0eb796732a81cf41918505f3d709d8ac89d82fbe39cc0486e541f72c4e 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd
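Stopping the kube-system containers before the restart is a plain crictl label filter followed by a stop. A one-liner sketch equivalent to the explicit ID list above:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
      | xargs -r sudo crictl stop --timeout 10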
	I1229 07:13:16.856297  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:16.856755  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:16.856816  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:16.856874  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:16.896739  182389 cri.go:96] found id: "a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:16.896767  182389 cri.go:96] found id: ""
	I1229 07:13:16.896777  182389 logs.go:282] 1 containers: [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03]
	I1229 07:13:16.896844  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:16.900993  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:16.901118  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:16.938340  182389 cri.go:96] found id: ""
	I1229 07:13:16.938369  182389 logs.go:282] 0 containers: []
	W1229 07:13:16.938382  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:16.938390  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:16.938471  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:16.975736  182389 cri.go:96] found id: ""
	I1229 07:13:16.975766  182389 logs.go:282] 0 containers: []
	W1229 07:13:16.975777  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:16.975783  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:16.975841  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:17.013713  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:17.013738  182389 cri.go:96] found id: ""
	I1229 07:13:17.013748  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:17.013800  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:17.017758  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:17.017807  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:17.055320  182389 cri.go:96] found id: ""
	I1229 07:13:17.055350  182389 logs.go:282] 0 containers: []
	W1229 07:13:17.055361  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:17.055369  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:17.055428  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:17.089780  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:17.089800  182389 cri.go:96] found id: ""
	I1229 07:13:17.089807  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:17.089858  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:17.093695  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:17.093760  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:17.130299  182389 cri.go:96] found id: ""
	I1229 07:13:17.130325  182389 logs.go:282] 0 containers: []
	W1229 07:13:17.130336  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:17.130344  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:17.130405  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:17.166061  182389 cri.go:96] found id: ""
	I1229 07:13:17.166087  182389 logs.go:282] 0 containers: []
	W1229 07:13:17.166098  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:17.166107  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:17.166121  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:17.202740  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:17.202770  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:17.252602  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:17.252637  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:17.294797  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:17.294824  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:17.395948  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:17.395981  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:17.411234  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:17.411260  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:17.478199  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:17.478233  182389 logs.go:123] Gathering logs for kube-apiserver [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03] ...
	I1229 07:13:17.478250  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:17.516203  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:17.516270  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:20.092283  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:19.071364  232412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-876718:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (5.285320691s)
	I1229 07:13:19.071395  232412 kic.go:203] duration metric: took 5.285471825s to extract preloaded images to volume ...
	W1229 07:13:19.071466  232412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:13:19.071499  232412 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:13:19.071542  232412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:13:19.124855  232412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-876718 --name old-k8s-version-876718 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-876718 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-876718 --network old-k8s-version-876718 --ip 192.168.103.2 --volume old-k8s-version-876718:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:13:19.400925  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Running}}
	I1229 07:13:19.420282  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:13:19.438553  232412 cli_runner.go:164] Run: docker exec old-k8s-version-876718 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:13:19.483117  232412 oci.go:144] the created container "old-k8s-version-876718" has a running status.
	I1229 07:13:19.483146  232412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa...
	I1229 07:13:19.797285  232412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:13:19.823735  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:13:19.840995  232412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:13:19.841017  232412 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-876718 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:13:19.881172  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:13:19.901565  232412 machine.go:94] provisionDockerMachine start ...
	I1229 07:13:19.901672  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:19.920798  232412 main.go:144] libmachine: Using SSH client type: native
	I1229 07:13:19.921248  232412 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1229 07:13:19.921278  232412 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:13:20.060252  232412 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-876718
	
	I1229 07:13:20.060285  232412 ubuntu.go:182] provisioning hostname "old-k8s-version-876718"
	I1229 07:13:20.060336  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:20.079596  232412 main.go:144] libmachine: Using SSH client type: native
	I1229 07:13:20.079831  232412 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1229 07:13:20.079848  232412 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-876718 && echo "old-k8s-version-876718" | sudo tee /etc/hostname
	I1229 07:13:20.226140  232412 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-876718
	
	I1229 07:13:20.226255  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:20.244719  232412 main.go:144] libmachine: Using SSH client type: native
	I1229 07:13:20.244926  232412 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1229 07:13:20.244945  232412 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-876718' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-876718/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-876718' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:13:20.380792  232412 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:13:20.380818  232412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:13:20.380841  232412 ubuntu.go:190] setting up certificates
	I1229 07:13:20.380862  232412 provision.go:84] configureAuth start
	I1229 07:13:20.380919  232412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-876718
	I1229 07:13:20.399607  232412 provision.go:143] copyHostCerts
	I1229 07:13:20.399672  232412 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:13:20.399691  232412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:13:20.399772  232412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:13:20.399888  232412 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:13:20.399900  232412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:13:20.399941  232412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:13:20.400020  232412 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:13:20.400032  232412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:13:20.400065  232412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:13:20.400134  232412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-876718 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-876718]
	I1229 07:13:20.644380  232412 provision.go:177] copyRemoteCerts
	I1229 07:13:20.644438  232412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:13:20.644485  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:20.662358  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:20.761829  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:13:20.782150  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:13:20.800137  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
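The generated server.pem carries the SANs listed in the provisioning step above (127.0.0.1, 192.168.103.2, localhost, minikube, old-k8s-version-876718). A quick sketch to confirm them once the cert lands in /etc/docker on the node:

    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'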
	I1229 07:13:20.818530  232412 provision.go:87] duration metric: took 437.645501ms to configureAuth
	I1229 07:13:20.818563  232412 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:13:20.818751  232412 config.go:182] Loaded profile config "old-k8s-version-876718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:13:20.818862  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:20.837349  232412 main.go:144] libmachine: Using SSH client type: native
	I1229 07:13:20.837615  232412 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1229 07:13:20.837638  232412 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:13:21.117978  232412 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:13:21.118005  232412 machine.go:97] duration metric: took 1.216418157s to provisionDockerMachine
	I1229 07:13:21.118017  232412 client.go:176] duration metric: took 8.164401558s to LocalClient.Create
	I1229 07:13:21.118037  232412 start.go:167] duration metric: took 8.164461986s to libmachine.API.Create "old-k8s-version-876718"
	I1229 07:13:21.118046  232412 start.go:293] postStartSetup for "old-k8s-version-876718" (driver="docker")
	I1229 07:13:21.118059  232412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:13:21.118130  232412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:13:21.118176  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:21.136280  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:21.236205  232412 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:13:21.240111  232412 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:13:21.240146  232412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:13:21.240158  232412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:13:21.240247  232412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:13:21.240344  232412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:13:21.240469  232412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:13:21.248582  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:13:21.269512  232412 start.go:296] duration metric: took 151.451791ms for postStartSetup
	I1229 07:13:21.269892  232412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-876718
	I1229 07:13:21.288595  232412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/config.json ...
	I1229 07:13:21.288881  232412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:13:21.288935  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:21.307033  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:21.402530  232412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:13:21.407561  232412 start.go:128] duration metric: took 8.455874857s to createHost
	I1229 07:13:21.407588  232412 start.go:83] releasing machines lock for "old-k8s-version-876718", held for 8.456015425s
	I1229 07:13:21.407655  232412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-876718
	I1229 07:13:21.425316  232412 ssh_runner.go:195] Run: cat /version.json
	I1229 07:13:21.425360  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:21.425392  232412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:13:21.425465  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:21.443743  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:21.444454  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:21.591722  232412 ssh_runner.go:195] Run: systemctl --version
	I1229 07:13:21.598470  232412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:13:21.634375  232412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:13:21.639290  232412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:13:21.639349  232412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:13:21.665703  232412 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:13:21.665731  232412 start.go:496] detecting cgroup driver to use...
	I1229 07:13:21.665775  232412 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:13:21.665825  232412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:13:21.681620  232412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:13:21.693761  232412 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:13:21.693822  232412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:13:21.711399  232412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:13:21.729553  232412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:13:21.813015  232412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:13:21.901348  232412 docker.go:234] disabling docker service ...
	I1229 07:13:21.901420  232412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:13:21.920395  232412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:13:21.933772  232412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:13:22.018719  232412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:13:22.100502  232412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:13:22.113089  232412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:13:22.127932  232412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1229 07:13:22.128000  232412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.138116  232412 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:13:22.138188  232412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.147131  232412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.155699  232412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.164690  232412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:13:22.173456  232412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.182766  232412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.196342  232412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:13:22.205882  232412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:13:22.213525  232412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:13:22.221128  232412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:13:22.300948  232412 ssh_runner.go:195] Run: sudo systemctl restart crio
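The sed edits above only touch individual keys in /etc/crio/crio.conf.d/02-crio.conf; reconstructed from those commands (section headers omitted), the drop-in ends up with roughly the following settings. This is an illustrative sketch derived from the logged commands, not a capture of the file itself:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart on the lines above are what make these settings take effect before the socket wait below.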
	I1229 07:13:22.448045  232412 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:13:22.448111  232412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:13:22.452309  232412 start.go:574] Will wait 60s for crictl version
	I1229 07:13:22.452370  232412 ssh_runner.go:195] Run: which crictl
	I1229 07:13:22.455846  232412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:13:22.480720  232412 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:13:22.480803  232412 ssh_runner.go:195] Run: crio --version
	I1229 07:13:22.507335  232412 ssh_runner.go:195] Run: crio --version
	I1229 07:13:22.536134  232412 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I1229 07:13:22.537673  232412 cli_runner.go:164] Run: docker network inspect old-k8s-version-876718 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:13:22.556002  232412 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1229 07:13:22.560080  232412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:13:22.570405  232412 kubeadm.go:884] updating cluster {Name:old-k8s-version-876718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-876718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:13:22.570516  232412 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:13:22.570555  232412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:13:22.601295  232412 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:13:22.601316  232412 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:13:22.601360  232412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:13:22.629040  232412 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:13:22.629065  232412 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:13:22.629074  232412 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1229 07:13:22.629195  232412 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-876718 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-876718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:13:22.629286  232412 ssh_runner.go:195] Run: crio config
	I1229 07:13:22.674086  232412 cni.go:84] Creating CNI manager for ""
	I1229 07:13:22.674109  232412 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:13:22.674123  232412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:13:22.674144  232412 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-876718 NodeName:old-k8s-version-876718 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:13:22.674307  232412 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-876718"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:13:22.674366  232412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1229 07:13:22.683758  232412 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:13:22.683813  232412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:13:22.692578  232412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1229 07:13:22.705443  232412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:13:22.720908  232412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1229 07:13:22.733678  232412 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:13:22.737455  232412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
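Reconstructed from the two /etc/hosts rewrites above (host.minikube.internal earlier, control-plane.minikube.internal here), the guest's /etc/hosts gains entries along these lines; the values come from the logged commands themselves, the actual file on the node is not shown in this log:

	192.168.103.1	host.minikube.internal
	192.168.103.2	control-plane.minikube.internal
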
	I1229 07:13:22.747369  232412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:13:22.828196  232412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:13:22.853689  232412 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718 for IP: 192.168.103.2
	I1229 07:13:22.853714  232412 certs.go:195] generating shared ca certs ...
	I1229 07:13:22.853735  232412 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.853879  232412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:13:22.853938  232412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:13:22.853948  232412 certs.go:257] generating profile certs ...
	I1229 07:13:22.853997  232412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.key
	I1229 07:13:22.854017  232412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt with IP's: []
	I1229 07:13:22.917907  232412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt ...
	I1229 07:13:22.917939  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt: {Name:mk25bbcaae62539eee882ca0606167315206d51c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.918101  232412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.key ...
	I1229 07:13:22.918120  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.key: {Name:mkadc251cc9377d07fe4adbef2bc9bbcc3fb264b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.918205  232412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.key.79c22471
	I1229 07:13:22.918230  232412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.crt.79c22471 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1229 07:13:22.961877  232412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.crt.79c22471 ...
	I1229 07:13:22.961905  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.crt.79c22471: {Name:mk9cb93ca45e6e90856ad71de6acb31da33e6a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.962063  232412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.key.79c22471 ...
	I1229 07:13:22.962078  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.key.79c22471: {Name:mk5b23596f2c58b8b88310bed922e82237445ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.962156  232412 certs.go:382] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.crt.79c22471 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.crt
	I1229 07:13:22.962250  232412 certs.go:386] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.key.79c22471 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.key
	I1229 07:13:22.962310  232412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.key
	I1229 07:13:22.962325  232412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.crt with IP's: []
	I1229 07:13:22.990299  232412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.crt ...
	I1229 07:13:22.990324  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.crt: {Name:mk0f5dee0a19240d47d171b4daaf3e7d80198dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.990498  232412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.key ...
	I1229 07:13:22.990515  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.key: {Name:mk3b0864aadc49297f3b72e94586904da32e14f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:22.990769  232412 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:13:22.990825  232412 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:13:22.990842  232412 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:13:22.990886  232412 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:13:22.990921  232412 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:13:22.990947  232412 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:13:22.990989  232412 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:13:22.991557  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:13:23.009795  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:13:23.027035  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:13:23.044651  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:13:23.061962  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1229 07:13:23.078802  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:13:23.096162  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:13:23.113338  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:13:23.130481  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:13:23.149769  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:13:23.167355  232412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:13:23.184666  232412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:13:23.197654  232412 ssh_runner.go:195] Run: openssl version
	I1229 07:13:23.203819  232412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:23.211567  232412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:13:23.218779  232412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:23.222509  232412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:23.222568  232412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:13:23.258392  232412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:13:23.266318  232412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:13:23.273911  232412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:13:23.281585  232412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:13:23.288951  232412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:13:23.292737  232412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:13:23.292793  232412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:13:23.328429  232412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:13:23.336593  232412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12733.pem /etc/ssl/certs/51391683.0
	I1229 07:13:23.344191  232412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:13:23.351843  232412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:13:23.359777  232412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:13:23.363650  232412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:13:23.363700  232412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:13:23.398423  232412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:13:23.406468  232412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127332.pem /etc/ssl/certs/3ec20f2e.0
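The openssl/ln pairs above follow OpenSSL's hashed CA-directory convention: each trusted certificate under /etc/ssl/certs is reachable through a symlink named after its subject hash. A minimal sketch of the same idea, using the minikubeCA certificate and the b5213941 hash seen above (illustrative shell, not part of the logged run):

	# print the subject hash of the CA cert, e.g. b5213941 for minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the cert under the hash-named path that OpenSSL-based clients look up
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
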
	I1229 07:13:23.413925  232412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:13:23.417464  232412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:13:23.417530  232412 kubeadm.go:401] StartCluster: {Name:old-k8s-version-876718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-876718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:13:23.417606  232412 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:13:23.417662  232412 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:13:23.444462  232412 cri.go:96] found id: ""
	I1229 07:13:23.444516  232412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:13:23.452993  232412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:13:23.460835  232412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:13:23.460897  232412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:13:23.468376  232412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:13:23.468394  232412 kubeadm.go:158] found existing configuration files:
	
	I1229 07:13:23.468438  232412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:13:23.475856  232412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:13:23.475913  232412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:13:23.483074  232412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:13:23.490626  232412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:13:23.490670  232412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:13:23.497819  232412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:13:23.505134  232412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:13:23.505174  232412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:13:23.513145  232412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:13:23.520557  232412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:13:23.520607  232412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:13:23.527796  232412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:13:23.584581  232412 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1229 07:13:23.584663  232412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:13:23.625141  232412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:13:23.625292  232412 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1229 07:13:23.625360  232412 kubeadm.go:319] OS: Linux
	I1229 07:13:23.625432  232412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:13:23.625513  232412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:13:23.625591  232412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:13:23.625657  232412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:13:23.625735  232412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:13:23.625814  232412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:13:23.625923  232412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:13:23.626003  232412 kubeadm.go:319] CGROUPS_IO: enabled
	I1229 07:13:23.696747  232412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:13:23.696904  232412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:13:23.697053  232412 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1229 07:13:23.842508  232412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:13:23.845952  232412 out.go:252]   - Generating certificates and keys ...
	I1229 07:13:23.846045  232412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:13:23.846142  232412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:13:23.990976  232412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:13:24.151142  232412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:13:24.231459  232412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:13:24.302648  232412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:13:24.372386  232412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:13:24.372568  232412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-876718] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1229 07:13:24.476371  232412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:13:24.476598  232412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-876718] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1229 07:13:24.610960  232412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:13:24.751481  232412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:13:24.920403  232412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:13:24.920545  232412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:13:25.012328  232412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:13:25.220086  232412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:13:25.546632  232412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:13:25.646410  232412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:13:25.646838  232412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:13:25.651046  232412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:13:25.093246  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:13:25.093314  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:25.093378  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:25.128141  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:25.128171  182389 cri.go:96] found id: "a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:25.128177  182389 cri.go:96] found id: ""
	I1229 07:13:25.128187  182389 logs.go:282] 2 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03]
	I1229 07:13:25.128256  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:25.132077  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:25.135873  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:25.135931  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:25.170562  182389 cri.go:96] found id: ""
	I1229 07:13:25.170589  182389 logs.go:282] 0 containers: []
	W1229 07:13:25.170599  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:25.170609  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:25.170663  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:25.211296  182389 cri.go:96] found id: ""
	I1229 07:13:25.211320  182389 logs.go:282] 0 containers: []
	W1229 07:13:25.211330  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:25.211337  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:25.211396  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:25.252493  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:25.252517  182389 cri.go:96] found id: ""
	I1229 07:13:25.252527  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:25.252587  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:25.256605  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:25.256671  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:25.294004  182389 cri.go:96] found id: ""
	I1229 07:13:25.294033  182389 logs.go:282] 0 containers: []
	W1229 07:13:25.294043  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:25.294050  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:25.294107  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:25.327861  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:25.327881  182389 cri.go:96] found id: ""
	I1229 07:13:25.327888  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:25.327948  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:25.331727  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:25.331778  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:25.366368  182389 cri.go:96] found id: ""
	I1229 07:13:25.366395  182389 logs.go:282] 0 containers: []
	W1229 07:13:25.366407  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:25.366416  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:25.366482  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:25.401023  182389 cri.go:96] found id: ""
	I1229 07:13:25.401050  182389 logs.go:282] 0 containers: []
	W1229 07:13:25.401060  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:25.401076  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:25.401088  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:13:25.652431  232412 out.go:252]   - Booting up control plane ...
	I1229 07:13:25.652531  232412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:13:25.652628  232412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:13:25.653170  232412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:13:25.666425  232412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:13:25.667163  232412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:13:25.667257  232412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:13:25.771877  232412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1229 07:13:27.083010  225445 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1 6b460bd082cb45e96cdc53abd5ace251fc95bd9957abc99c11caa89672b33281 3983ff0eb796732a81cf41918505f3d709d8ac89d82fbe39cc0486e541f72c4e 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd: (12.681800307s)
	I1229 07:13:27.083096  225445 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 07:13:27.123246  225445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:13:27.132082  225445 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 29 07:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 29 07:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 29 07:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 29 07:12 /etc/kubernetes/scheduler.conf
	
	I1229 07:13:27.132159  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:13:27.140316  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:13:27.148272  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:13:27.156442  225445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:13:27.156517  225445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:13:27.164641  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:13:27.172663  225445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:13:27.172723  225445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:13:27.181809  225445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:13:27.190445  225445 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:13:27.237064  225445 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:13:27.675324  225445 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:13:27.876403  225445 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:13:27.930467  225445 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:13:27.991043  225445 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:13:27.991118  225445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:13:28.002111  225445 api_server.go:72] duration metric: took 11.075936ms to wait for apiserver process to appear ...
	I1229 07:13:28.002147  225445 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:13:28.002167  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:30.274410  232412 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502657 seconds
	I1229 07:13:30.274540  232412 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:13:30.286192  232412 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:13:30.808427  232412 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:13:30.808618  232412 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-876718 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:13:31.319168  232412 kubeadm.go:319] [bootstrap-token] Using token: qfqltq.4rfdctic8wnerzf4
	I1229 07:13:31.321616  232412 out.go:252]   - Configuring RBAC rules ...
	I1229 07:13:31.321762  232412 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:13:31.325246  232412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:13:31.331648  232412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:13:31.334323  232412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:13:31.337578  232412 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:13:31.340142  232412 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:13:31.349525  232412 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:13:31.532248  232412 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:13:31.729410  232412 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:13:31.730362  232412 kubeadm.go:319] 
	I1229 07:13:31.730463  232412 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:13:31.730473  232412 kubeadm.go:319] 
	I1229 07:13:31.730573  232412 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:13:31.730581  232412 kubeadm.go:319] 
	I1229 07:13:31.730616  232412 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:13:31.730708  232412 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:13:31.730800  232412 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:13:31.730809  232412 kubeadm.go:319] 
	I1229 07:13:31.730870  232412 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:13:31.730886  232412 kubeadm.go:319] 
	I1229 07:13:31.730935  232412 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:13:31.730942  232412 kubeadm.go:319] 
	I1229 07:13:31.731005  232412 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:13:31.731125  232412 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:13:31.731196  232412 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:13:31.731203  232412 kubeadm.go:319] 
	I1229 07:13:31.731293  232412 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:13:31.731361  232412 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:13:31.731367  232412 kubeadm.go:319] 
	I1229 07:13:31.731475  232412 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qfqltq.4rfdctic8wnerzf4 \
	I1229 07:13:31.731609  232412 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:13:31.731644  232412 kubeadm.go:319] 	--control-plane 
	I1229 07:13:31.731652  232412 kubeadm.go:319] 
	I1229 07:13:31.731754  232412 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:13:31.731768  232412 kubeadm.go:319] 
	I1229 07:13:31.731835  232412 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qfqltq.4rfdctic8wnerzf4 \
	I1229 07:13:31.731943  232412 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
	I1229 07:13:31.734182  232412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:13:31.734329  232412 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:13:31.734359  232412 cni.go:84] Creating CNI manager for ""
	I1229 07:13:31.734372  232412 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:13:31.735769  232412 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:13:31.736733  232412 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:13:31.741719  232412 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1229 07:13:31.741736  232412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:13:31.754704  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:13:32.373246  232412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:13:32.373353  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:32.373381  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-876718 minikube.k8s.io/updated_at=2025_12_29T07_13_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=old-k8s-version-876718 minikube.k8s.io/primary=true
	I1229 07:13:32.383010  232412 ops.go:34] apiserver oom_adj: -16
	I1229 07:13:32.453716  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:33.005362  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:13:33.005417  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:35.459857  182389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.058746973s)
	W1229 07:13:35.459890  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1229 07:13:35.459903  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:35.459914  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:35.500390  182389 logs.go:123] Gathering logs for kube-apiserver [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03] ...
	I1229 07:13:35.500431  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:35.539445  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:35.539475  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:35.613243  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:35.613274  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:35.651132  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:35.651167  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:35.750065  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:35.750096  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:35.765106  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:35.765133  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:35.799316  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:35.799339  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:32.953867  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:33.454764  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:33.954724  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:34.454007  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:34.953797  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:35.453811  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:35.954430  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:36.454281  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:36.954511  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:37.453843  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:38.008348  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:13:38.008437  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:38.350269  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:38.919962  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": read tcp 192.168.94.1:43076->192.168.94.2:8443: read: connection reset by peer
	I1229 07:13:38.920034  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:38.920097  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:38.960288  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:38.960313  182389 cri.go:96] found id: "a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	I1229 07:13:38.960318  182389 cri.go:96] found id: ""
	I1229 07:13:38.960328  182389 logs.go:282] 2 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03]
	I1229 07:13:38.960386  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:38.965020  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:38.969189  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:38.969288  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:39.004886  182389 cri.go:96] found id: ""
	I1229 07:13:39.004909  182389 logs.go:282] 0 containers: []
	W1229 07:13:39.004916  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:39.004922  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:39.004977  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:39.042553  182389 cri.go:96] found id: ""
	I1229 07:13:39.042577  182389 logs.go:282] 0 containers: []
	W1229 07:13:39.042588  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:39.042595  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:39.042654  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:39.076886  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:39.076926  182389 cri.go:96] found id: ""
	I1229 07:13:39.076938  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:39.076995  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:39.080906  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:39.080963  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:39.115253  182389 cri.go:96] found id: ""
	I1229 07:13:39.115280  182389 logs.go:282] 0 containers: []
	W1229 07:13:39.115292  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:39.115301  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:39.115361  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:39.154745  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:39.154762  182389 cri.go:96] found id: ""
	I1229 07:13:39.154770  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:39.154817  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:39.158604  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:39.158658  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:39.194696  182389 cri.go:96] found id: ""
	I1229 07:13:39.194723  182389 logs.go:282] 0 containers: []
	W1229 07:13:39.194733  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:39.194741  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:39.194803  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:39.229032  182389 cri.go:96] found id: ""
	I1229 07:13:39.229058  182389 logs.go:282] 0 containers: []
	W1229 07:13:39.229069  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:39.229085  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:39.229104  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:39.268582  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:39.268615  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:39.284052  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:39.284078  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:39.321933  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:39.321974  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:39.359166  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:39.359192  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:39.454905  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:39.454936  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:39.519680  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:39.519702  182389 logs.go:123] Gathering logs for kube-apiserver [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03] ...
	I1229 07:13:39.519719  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	W1229 07:13:39.555261  182389 logs.go:130] failed kube-apiserver [a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:13:39.552524    6038 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03\": container with ID starting with a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03 not found: ID does not exist" containerID="a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	time="2025-12-29T07:13:39Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03\": container with ID starting with a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1229 07:13:39.552524    6038 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03\": container with ID starting with a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03 not found: ID does not exist" containerID="a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03"
	time="2025-12-29T07:13:39Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03\": container with ID starting with a69c9fe80c213404a0b8de0af5c9ef86b41f70e74173f141059b09d0534b6a03 not found: ID does not exist"
	
	** /stderr **
	I1229 07:13:39.555286  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:39.555301  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:39.627140  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:39.627170  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:37.954693  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:38.454110  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:38.954444  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:39.453818  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:39.954579  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:40.454095  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:40.953754  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:41.454664  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:41.954105  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:42.454031  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:43.011312  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:13:43.011358  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:42.954576  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:43.454462  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:43.954434  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:44.454194  232412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:13:44.524371  232412 kubeadm.go:1114] duration metric: took 12.151066728s to wait for elevateKubeSystemPrivileges
	I1229 07:13:44.524408  232412 kubeadm.go:403] duration metric: took 21.106883647s to StartCluster
	I1229 07:13:44.524430  232412 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:44.524501  232412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:13:44.526405  232412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:13:44.526612  232412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:13:44.526621  232412 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:13:44.526689  232412 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:13:44.526809  232412 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-876718"
	I1229 07:13:44.526831  232412 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-876718"
	I1229 07:13:44.526852  232412 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-876718"
	I1229 07:13:44.526867  232412 host.go:66] Checking if "old-k8s-version-876718" exists ...
	I1229 07:13:44.526894  232412 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-876718"
	I1229 07:13:44.526949  232412 config.go:182] Loaded profile config "old-k8s-version-876718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:13:44.527359  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:13:44.527497  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:13:44.529096  232412 out.go:179] * Verifying Kubernetes components...
	I1229 07:13:44.530850  232412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:13:44.555387  232412 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:13:44.555416  232412 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-876718"
	I1229 07:13:44.555500  232412 host.go:66] Checking if "old-k8s-version-876718" exists ...
	I1229 07:13:44.556012  232412 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:13:44.559400  232412 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:13:44.559419  232412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:13:44.559475  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:44.589049  232412 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:13:44.589504  232412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:13:44.589571  232412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:13:44.591014  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:44.612949  232412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:13:44.623494  232412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:13:44.687072  232412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:13:44.701984  232412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:13:44.726917  232412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:13:44.890835  232412 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1229 07:13:44.893521  232412 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-876718" to be "Ready" ...
	I1229 07:13:45.179292  232412 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:13:42.173283  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:42.173701  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:42.173753  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:42.173801  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:42.208248  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:42.208274  182389 cri.go:96] found id: ""
	I1229 07:13:42.208283  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:13:42.208331  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:42.212535  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:42.212604  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:42.247446  182389 cri.go:96] found id: ""
	I1229 07:13:42.247474  182389 logs.go:282] 0 containers: []
	W1229 07:13:42.247482  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:42.247488  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:42.247542  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:42.282105  182389 cri.go:96] found id: ""
	I1229 07:13:42.282135  182389 logs.go:282] 0 containers: []
	W1229 07:13:42.282148  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:42.282157  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:42.282210  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:42.318880  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:42.318902  182389 cri.go:96] found id: ""
	I1229 07:13:42.318911  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:42.318965  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:42.322608  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:42.322657  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:42.357451  182389 cri.go:96] found id: ""
	I1229 07:13:42.357477  182389 logs.go:282] 0 containers: []
	W1229 07:13:42.357487  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:42.357494  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:42.357555  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:42.394263  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:42.394284  182389 cri.go:96] found id: ""
	I1229 07:13:42.394291  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:42.394338  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:42.398109  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:42.398182  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:42.432104  182389 cri.go:96] found id: ""
	I1229 07:13:42.432127  182389 logs.go:282] 0 containers: []
	W1229 07:13:42.432137  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:42.432144  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:42.432204  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:42.467181  182389 cri.go:96] found id: ""
	I1229 07:13:42.467202  182389 logs.go:282] 0 containers: []
	W1229 07:13:42.467210  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:42.467232  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:42.467247  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:42.506689  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:42.506725  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:42.602180  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:42.602209  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:42.617213  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:42.617260  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:42.675437  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:42.675461  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:42.675476  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:42.713632  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:42.713661  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:42.790872  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:42.790905  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:42.826701  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:42.826734  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:45.379314  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:45.379766  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:45.379831  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:45.379896  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:45.420183  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:45.420204  182389 cri.go:96] found id: ""
	I1229 07:13:45.420211  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:13:45.420280  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:45.424205  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:45.424306  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:45.465156  182389 cri.go:96] found id: ""
	I1229 07:13:45.465183  182389 logs.go:282] 0 containers: []
	W1229 07:13:45.465192  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:45.465199  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:45.465277  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:45.510210  182389 cri.go:96] found id: ""
	I1229 07:13:45.510255  182389 logs.go:282] 0 containers: []
	W1229 07:13:45.510265  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:45.510272  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:45.510338  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:45.558345  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:45.558370  182389 cri.go:96] found id: ""
	I1229 07:13:45.558380  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:45.558441  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:45.563126  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:45.563188  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:45.609320  182389 cri.go:96] found id: ""
	I1229 07:13:45.609348  182389 logs.go:282] 0 containers: []
	W1229 07:13:45.609359  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:45.609367  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:45.609427  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:45.655075  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:45.655102  182389 cri.go:96] found id: ""
	I1229 07:13:45.655112  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:45.655175  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:45.659927  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:45.660004  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:45.705636  182389 cri.go:96] found id: ""
	I1229 07:13:45.705664  182389 logs.go:282] 0 containers: []
	W1229 07:13:45.705675  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:45.705683  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:45.705743  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:45.752068  182389 cri.go:96] found id: ""
	I1229 07:13:45.752097  182389 logs.go:282] 0 containers: []
	W1229 07:13:45.752108  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:45.752121  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:45.752154  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:45.818919  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:45.818944  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:45.818960  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:45.860191  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:45.860232  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:45.180428  232412 addons.go:530] duration metric: took 653.739863ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:13:45.396697  232412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-876718" context rescaled to 1 replicas
	W1229 07:13:46.897205  232412 node_ready.go:57] node "old-k8s-version-876718" has "Ready":"False" status (will retry)
	I1229 07:13:47.345539  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:35842->192.168.76.2:8443: read: connection reset by peer
	I1229 07:13:47.345586  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:47.345937  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:47.502241  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:47.502670  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:48.002287  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:48.002746  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:48.502280  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:48.502687  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:49.002289  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:49.002705  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:45.937712  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:45.937747  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:45.973128  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:45.973156  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:46.021933  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:46.021965  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:46.060420  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:46.060444  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:46.160019  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:46.160049  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:48.676603  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:48.677045  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:48.677107  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:48.677165  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:48.712382  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:48.712402  182389 cri.go:96] found id: ""
	I1229 07:13:48.712410  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:13:48.712455  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:48.716328  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:48.716381  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:48.751834  182389 cri.go:96] found id: ""
	I1229 07:13:48.751857  182389 logs.go:282] 0 containers: []
	W1229 07:13:48.751864  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:48.751870  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:48.751916  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:48.785474  182389 cri.go:96] found id: ""
	I1229 07:13:48.785499  182389 logs.go:282] 0 containers: []
	W1229 07:13:48.785509  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:48.785516  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:48.785572  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:48.821491  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:48.821510  182389 cri.go:96] found id: ""
	I1229 07:13:48.821517  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:48.821562  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:48.825494  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:48.825553  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:48.859300  182389 cri.go:96] found id: ""
	I1229 07:13:48.859324  182389 logs.go:282] 0 containers: []
	W1229 07:13:48.859332  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:48.859338  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:48.859396  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:48.892564  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:48.892586  182389 cri.go:96] found id: ""
	I1229 07:13:48.892593  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:48.892644  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:48.896587  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:48.896656  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:48.930625  182389 cri.go:96] found id: ""
	I1229 07:13:48.930649  182389 logs.go:282] 0 containers: []
	W1229 07:13:48.930657  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:48.930663  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:48.930708  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:48.966286  182389 cri.go:96] found id: ""
	I1229 07:13:48.966316  182389 logs.go:282] 0 containers: []
	W1229 07:13:48.966325  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:48.966335  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:48.966347  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:49.003540  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:49.003564  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:49.077940  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:49.077976  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:49.112760  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:49.112788  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:49.163113  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:49.163141  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:49.201184  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:49.201210  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:49.293842  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:49.293872  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:49.309402  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:49.309427  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:49.367622  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1229 07:13:49.396970  232412 node_ready.go:57] node "old-k8s-version-876718" has "Ready":"False" status (will retry)
	W1229 07:13:51.896747  232412 node_ready.go:57] node "old-k8s-version-876718" has "Ready":"False" status (will retry)
	I1229 07:13:49.502359  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:49.502721  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:50.002356  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:50.002767  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:50.502411  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:50.502825  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:51.002290  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:51.002743  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:51.502297  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:51.502765  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:52.003031  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:52.003466  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:52.503194  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:52.503627  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:53.002265  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:53.002613  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:53.502278  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:53.502671  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:54.002302  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:54.002786  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:51.867955  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:51.868389  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:51.868441  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:51.868503  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:51.903720  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:51.903740  182389 cri.go:96] found id: ""
	I1229 07:13:51.903749  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:13:51.903808  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:51.907703  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:51.907767  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:51.942449  182389 cri.go:96] found id: ""
	I1229 07:13:51.942475  182389 logs.go:282] 0 containers: []
	W1229 07:13:51.942485  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:51.942492  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:51.942544  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:51.976416  182389 cri.go:96] found id: ""
	I1229 07:13:51.976442  182389 logs.go:282] 0 containers: []
	W1229 07:13:51.976451  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:51.976460  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:51.976532  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:52.011687  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:52.011706  182389 cri.go:96] found id: ""
	I1229 07:13:52.011718  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:52.011774  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:52.015826  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:52.015888  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:52.050489  182389 cri.go:96] found id: ""
	I1229 07:13:52.050517  182389 logs.go:282] 0 containers: []
	W1229 07:13:52.050528  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:52.050536  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:52.050589  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:52.086014  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:52.086035  182389 cri.go:96] found id: ""
	I1229 07:13:52.086045  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:52.086096  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:52.089894  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:52.089955  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:52.125712  182389 cri.go:96] found id: ""
	I1229 07:13:52.125741  182389 logs.go:282] 0 containers: []
	W1229 07:13:52.125751  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:52.125757  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:52.125810  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:52.160821  182389 cri.go:96] found id: ""
	I1229 07:13:52.160849  182389 logs.go:282] 0 containers: []
	W1229 07:13:52.160859  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:52.160871  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:52.160886  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:52.234136  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:52.234166  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:52.269570  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:52.269596  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:52.321103  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:52.321134  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:52.360109  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:52.360134  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:52.453925  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:52.453955  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:52.469389  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:52.469413  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:52.528297  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:52.528317  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:52.528335  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:55.068404  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:55.068826  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:55.068895  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:55.068955  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:55.104072  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:55.104091  182389 cri.go:96] found id: ""
	I1229 07:13:55.104099  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:13:55.104156  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:55.107997  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:55.108064  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:55.141864  182389 cri.go:96] found id: ""
	I1229 07:13:55.141887  182389 logs.go:282] 0 containers: []
	W1229 07:13:55.141894  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:55.141899  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:55.141958  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:55.176503  182389 cri.go:96] found id: ""
	I1229 07:13:55.176523  182389 logs.go:282] 0 containers: []
	W1229 07:13:55.176533  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:55.176541  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:55.176597  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:55.210083  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:55.210109  182389 cri.go:96] found id: ""
	I1229 07:13:55.210120  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:55.210182  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:55.214180  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:55.214259  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:55.247385  182389 cri.go:96] found id: ""
	I1229 07:13:55.247406  182389 logs.go:282] 0 containers: []
	W1229 07:13:55.247413  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:55.247419  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:55.247475  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:55.281283  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:55.281304  182389 cri.go:96] found id: ""
	I1229 07:13:55.281312  182389 logs.go:282] 1 containers: [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:55.281356  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:55.285167  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:55.285244  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:55.320633  182389 cri.go:96] found id: ""
	I1229 07:13:55.320653  182389 logs.go:282] 0 containers: []
	W1229 07:13:55.320661  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:55.320666  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:55.320711  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:55.356037  182389 cri.go:96] found id: ""
	I1229 07:13:55.356062  182389 logs.go:282] 0 containers: []
	W1229 07:13:55.356073  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:55.356084  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:55.356099  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:55.430581  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:55.430613  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:55.466326  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:55.466351  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:55.513331  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:55.513357  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:55.550799  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:55.550827  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:55.646672  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:55.646701  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:55.662388  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:55.662418  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:55.721567  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:55.721588  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:55.721642  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	W1229 07:13:54.396516  232412 node_ready.go:57] node "old-k8s-version-876718" has "Ready":"False" status (will retry)
	W1229 07:13:56.396706  232412 node_ready.go:57] node "old-k8s-version-876718" has "Ready":"False" status (will retry)
	I1229 07:13:56.898902  232412 node_ready.go:49] node "old-k8s-version-876718" is "Ready"
	I1229 07:13:56.898939  232412 node_ready.go:38] duration metric: took 12.005386988s for node "old-k8s-version-876718" to be "Ready" ...
	I1229 07:13:56.898955  232412 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:13:56.899014  232412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:13:56.914252  232412 api_server.go:72] duration metric: took 12.387572617s to wait for apiserver process to appear ...
	I1229 07:13:56.914283  232412 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:13:56.914306  232412 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:13:56.919515  232412 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:13:56.920490  232412 api_server.go:141] control plane version: v1.28.0
	I1229 07:13:56.920514  232412 api_server.go:131] duration metric: took 6.224333ms to wait for apiserver health ...
	I1229 07:13:56.920522  232412 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:13:56.924920  232412 system_pods.go:59] 8 kube-system pods found
	I1229 07:13:56.924952  232412 system_pods.go:61] "coredns-5dd5756b68-pnstl" [36610378-7535-44b7-a5f7-2aa2a3f81b36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:13:56.924970  232412 system_pods.go:61] "etcd-old-k8s-version-876718" [7e0e2fbb-02b0-446f-af69-c971677e0f79] Running
	I1229 07:13:56.924979  232412 system_pods.go:61] "kindnet-kgr4x" [aa334643-1cbf-42a1-9792-4c11f4bd321e] Running
	I1229 07:13:56.924989  232412 system_pods.go:61] "kube-apiserver-old-k8s-version-876718" [72157bce-e9ed-40da-b723-fcb40a233dcc] Running
	I1229 07:13:56.924996  232412 system_pods.go:61] "kube-controller-manager-old-k8s-version-876718" [717ced59-853d-4ce5-97da-aa48d41f0e62] Running
	I1229 07:13:56.925001  232412 system_pods.go:61] "kube-proxy-2v9kr" [3a7b9034-71d8-48ea-a007-730b24cdf7e1] Running
	I1229 07:13:56.925007  232412 system_pods.go:61] "kube-scheduler-old-k8s-version-876718" [84df3d92-dee8-4930-8b8d-93257c745173] Running
	I1229 07:13:56.925018  232412 system_pods.go:61] "storage-provisioner" [554b2d49-670e-4430-bd1c-298394852b83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:13:56.925026  232412 system_pods.go:74] duration metric: took 4.49829ms to wait for pod list to return data ...
	I1229 07:13:56.925039  232412 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:13:56.926932  232412 default_sa.go:45] found service account: "default"
	I1229 07:13:56.926952  232412 default_sa.go:55] duration metric: took 1.906641ms for default service account to be created ...
	I1229 07:13:56.926961  232412 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:13:56.929860  232412 system_pods.go:86] 8 kube-system pods found
	I1229 07:13:56.929883  232412 system_pods.go:89] "coredns-5dd5756b68-pnstl" [36610378-7535-44b7-a5f7-2aa2a3f81b36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:13:56.929893  232412 system_pods.go:89] "etcd-old-k8s-version-876718" [7e0e2fbb-02b0-446f-af69-c971677e0f79] Running
	I1229 07:13:56.929899  232412 system_pods.go:89] "kindnet-kgr4x" [aa334643-1cbf-42a1-9792-4c11f4bd321e] Running
	I1229 07:13:56.929902  232412 system_pods.go:89] "kube-apiserver-old-k8s-version-876718" [72157bce-e9ed-40da-b723-fcb40a233dcc] Running
	I1229 07:13:56.929911  232412 system_pods.go:89] "kube-controller-manager-old-k8s-version-876718" [717ced59-853d-4ce5-97da-aa48d41f0e62] Running
	I1229 07:13:56.929914  232412 system_pods.go:89] "kube-proxy-2v9kr" [3a7b9034-71d8-48ea-a007-730b24cdf7e1] Running
	I1229 07:13:56.929920  232412 system_pods.go:89] "kube-scheduler-old-k8s-version-876718" [84df3d92-dee8-4930-8b8d-93257c745173] Running
	I1229 07:13:56.929925  232412 system_pods.go:89] "storage-provisioner" [554b2d49-670e-4430-bd1c-298394852b83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:13:56.929949  232412 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:13:57.202209  232412 system_pods.go:86] 8 kube-system pods found
	I1229 07:13:57.202263  232412 system_pods.go:89] "coredns-5dd5756b68-pnstl" [36610378-7535-44b7-a5f7-2aa2a3f81b36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:13:57.202268  232412 system_pods.go:89] "etcd-old-k8s-version-876718" [7e0e2fbb-02b0-446f-af69-c971677e0f79] Running
	I1229 07:13:57.202275  232412 system_pods.go:89] "kindnet-kgr4x" [aa334643-1cbf-42a1-9792-4c11f4bd321e] Running
	I1229 07:13:57.202280  232412 system_pods.go:89] "kube-apiserver-old-k8s-version-876718" [72157bce-e9ed-40da-b723-fcb40a233dcc] Running
	I1229 07:13:57.202286  232412 system_pods.go:89] "kube-controller-manager-old-k8s-version-876718" [717ced59-853d-4ce5-97da-aa48d41f0e62] Running
	I1229 07:13:57.202291  232412 system_pods.go:89] "kube-proxy-2v9kr" [3a7b9034-71d8-48ea-a007-730b24cdf7e1] Running
	I1229 07:13:57.202296  232412 system_pods.go:89] "kube-scheduler-old-k8s-version-876718" [84df3d92-dee8-4930-8b8d-93257c745173] Running
	I1229 07:13:57.202302  232412 system_pods.go:89] "storage-provisioner" [554b2d49-670e-4430-bd1c-298394852b83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:13:57.510631  232412 system_pods.go:86] 8 kube-system pods found
	I1229 07:13:57.510659  232412 system_pods.go:89] "coredns-5dd5756b68-pnstl" [36610378-7535-44b7-a5f7-2aa2a3f81b36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:13:57.510668  232412 system_pods.go:89] "etcd-old-k8s-version-876718" [7e0e2fbb-02b0-446f-af69-c971677e0f79] Running
	I1229 07:13:57.510673  232412 system_pods.go:89] "kindnet-kgr4x" [aa334643-1cbf-42a1-9792-4c11f4bd321e] Running
	I1229 07:13:57.510677  232412 system_pods.go:89] "kube-apiserver-old-k8s-version-876718" [72157bce-e9ed-40da-b723-fcb40a233dcc] Running
	I1229 07:13:57.510680  232412 system_pods.go:89] "kube-controller-manager-old-k8s-version-876718" [717ced59-853d-4ce5-97da-aa48d41f0e62] Running
	I1229 07:13:57.510683  232412 system_pods.go:89] "kube-proxy-2v9kr" [3a7b9034-71d8-48ea-a007-730b24cdf7e1] Running
	I1229 07:13:57.510687  232412 system_pods.go:89] "kube-scheduler-old-k8s-version-876718" [84df3d92-dee8-4930-8b8d-93257c745173] Running
	I1229 07:13:57.510696  232412 system_pods.go:89] "storage-provisioner" [554b2d49-670e-4430-bd1c-298394852b83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:13:54.502605  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:54.503012  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:55.002522  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:55.002960  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:55.502499  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:55.502875  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:56.002504  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:56.002937  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:56.502304  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:56.502707  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:57.002318  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:57.002691  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:57.502290  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:57.502721  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:58.002277  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:58.002583  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:58.503241  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:58.503628  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:59.002252  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:59.002595  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:13:57.995149  232412 system_pods.go:86] 8 kube-system pods found
	I1229 07:13:57.995184  232412 system_pods.go:89] "coredns-5dd5756b68-pnstl" [36610378-7535-44b7-a5f7-2aa2a3f81b36] Running
	I1229 07:13:57.995190  232412 system_pods.go:89] "etcd-old-k8s-version-876718" [7e0e2fbb-02b0-446f-af69-c971677e0f79] Running
	I1229 07:13:57.995194  232412 system_pods.go:89] "kindnet-kgr4x" [aa334643-1cbf-42a1-9792-4c11f4bd321e] Running
	I1229 07:13:57.995198  232412 system_pods.go:89] "kube-apiserver-old-k8s-version-876718" [72157bce-e9ed-40da-b723-fcb40a233dcc] Running
	I1229 07:13:57.995201  232412 system_pods.go:89] "kube-controller-manager-old-k8s-version-876718" [717ced59-853d-4ce5-97da-aa48d41f0e62] Running
	I1229 07:13:57.995205  232412 system_pods.go:89] "kube-proxy-2v9kr" [3a7b9034-71d8-48ea-a007-730b24cdf7e1] Running
	I1229 07:13:57.995208  232412 system_pods.go:89] "kube-scheduler-old-k8s-version-876718" [84df3d92-dee8-4930-8b8d-93257c745173] Running
	I1229 07:13:57.995211  232412 system_pods.go:89] "storage-provisioner" [554b2d49-670e-4430-bd1c-298394852b83] Running
	I1229 07:13:57.995239  232412 system_pods.go:126] duration metric: took 1.068271902s to wait for k8s-apps to be running ...
	I1229 07:13:57.995246  232412 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:13:57.995290  232412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:13:58.008371  232412 system_svc.go:56] duration metric: took 13.116139ms WaitForService to wait for kubelet
	I1229 07:13:58.008411  232412 kubeadm.go:587] duration metric: took 13.481761452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:13:58.008438  232412 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:13:58.011037  232412 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:13:58.011063  232412 node_conditions.go:123] node cpu capacity is 8
	I1229 07:13:58.011078  232412 node_conditions.go:105] duration metric: took 2.634763ms to run NodePressure ...
	I1229 07:13:58.011091  232412 start.go:242] waiting for startup goroutines ...
	I1229 07:13:58.011101  232412 start.go:247] waiting for cluster config update ...
	I1229 07:13:58.011114  232412 start.go:256] writing updated cluster config ...
	I1229 07:13:58.011381  232412 ssh_runner.go:195] Run: rm -f paused
	I1229 07:13:58.015037  232412 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:13:58.019564  232412 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pnstl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.023840  232412 pod_ready.go:94] pod "coredns-5dd5756b68-pnstl" is "Ready"
	I1229 07:13:58.023856  232412 pod_ready.go:86] duration metric: took 4.271479ms for pod "coredns-5dd5756b68-pnstl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.026415  232412 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.030048  232412 pod_ready.go:94] pod "etcd-old-k8s-version-876718" is "Ready"
	I1229 07:13:58.030066  232412 pod_ready.go:86] duration metric: took 3.634482ms for pod "etcd-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.032417  232412 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.036296  232412 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-876718" is "Ready"
	I1229 07:13:58.036312  232412 pod_ready.go:86] duration metric: took 3.878212ms for pod "kube-apiserver-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.038807  232412 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.419986  232412 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-876718" is "Ready"
	I1229 07:13:58.420013  232412 pod_ready.go:86] duration metric: took 381.190229ms for pod "kube-controller-manager-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:58.620637  232412 pod_ready.go:83] waiting for pod "kube-proxy-2v9kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:59.019609  232412 pod_ready.go:94] pod "kube-proxy-2v9kr" is "Ready"
	I1229 07:13:59.019634  232412 pod_ready.go:86] duration metric: took 398.971058ms for pod "kube-proxy-2v9kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:59.220301  232412 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:59.619908  232412 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-876718" is "Ready"
	I1229 07:13:59.619939  232412 pod_ready.go:86] duration metric: took 399.613954ms for pod "kube-scheduler-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:13:59.619954  232412 pod_ready.go:40] duration metric: took 1.604885578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:13:59.664490  232412 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1229 07:13:59.666346  232412 out.go:203] 
	W1229 07:13:59.667632  232412 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1229 07:13:59.668858  232412 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:13:59.670375  232412 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-876718" cluster and "default" namespace by default
	I1229 07:13:58.263005  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:13:58.263540  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:13:58.263602  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:13:58.263671  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:13:58.297935  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:58.297955  182389 cri.go:96] found id: ""
	I1229 07:13:58.297963  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:13:58.298010  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:58.301788  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:13:58.301844  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:13:58.336878  182389 cri.go:96] found id: ""
	I1229 07:13:58.336904  182389 logs.go:282] 0 containers: []
	W1229 07:13:58.336913  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:13:58.336918  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:13:58.336966  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:13:58.371788  182389 cri.go:96] found id: ""
	I1229 07:13:58.371816  182389 logs.go:282] 0 containers: []
	W1229 07:13:58.371825  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:13:58.371831  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:13:58.371882  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:13:58.407377  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:58.407398  182389 cri.go:96] found id: ""
	I1229 07:13:58.407404  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:13:58.407460  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:58.411355  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:13:58.411411  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:13:58.446517  182389 cri.go:96] found id: ""
	I1229 07:13:58.446537  182389 logs.go:282] 0 containers: []
	W1229 07:13:58.446544  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:13:58.446552  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:13:58.446606  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:13:58.482623  182389 cri.go:96] found id: "f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a"
	I1229 07:13:58.482645  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:58.482649  182389 cri.go:96] found id: ""
	I1229 07:13:58.482656  182389 logs.go:282] 2 containers: [f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:13:58.482714  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:58.486509  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:13:58.489928  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:13:58.489990  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:13:58.524816  182389 cri.go:96] found id: ""
	I1229 07:13:58.524836  182389 logs.go:282] 0 containers: []
	W1229 07:13:58.524843  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:13:58.524848  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:13:58.524895  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:13:58.559550  182389 cri.go:96] found id: ""
	I1229 07:13:58.559571  182389 logs.go:282] 0 containers: []
	W1229 07:13:58.559579  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:13:58.559594  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:13:58.559607  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:13:58.635398  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:13:58.635424  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:13:58.669958  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:13:58.669980  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:13:58.720131  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:13:58.720163  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:13:58.779046  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:13:58.779068  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:13:58.779082  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:13:58.817486  182389 logs.go:123] Gathering logs for kube-controller-manager [f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a] ...
	I1229 07:13:58.817512  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a"
	I1229 07:13:58.853462  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:13:58.853486  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:13:58.890724  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:13:58.890757  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:13:58.991125  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:13:58.991163  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:13:59.502268  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:13:59.502658  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:00.002292  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:00.002681  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:00.502299  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:00.502682  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:01.002315  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:01.002713  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:01.502349  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:01.502746  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:02.002284  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:02.002638  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:02.502310  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:02.502726  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:03.002294  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:03.002675  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:03.502277  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:01.507103  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:14:01.507483  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:14:01.507532  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:14:01.507580  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:14:01.542852  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:14:01.542874  182389 cri.go:96] found id: ""
	I1229 07:14:01.542882  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:14:01.542967  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:01.547463  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:14:01.547532  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:14:01.582284  182389 cri.go:96] found id: ""
	I1229 07:14:01.582309  182389 logs.go:282] 0 containers: []
	W1229 07:14:01.582318  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:14:01.582324  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:14:01.582435  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:14:01.618765  182389 cri.go:96] found id: ""
	I1229 07:14:01.618788  182389 logs.go:282] 0 containers: []
	W1229 07:14:01.618795  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:14:01.618801  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:14:01.618845  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:14:01.654463  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:14:01.654489  182389 cri.go:96] found id: ""
	I1229 07:14:01.654500  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:14:01.654552  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:01.658555  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:14:01.658632  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:14:01.695316  182389 cri.go:96] found id: ""
	I1229 07:14:01.695344  182389 logs.go:282] 0 containers: []
	W1229 07:14:01.695357  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:14:01.695364  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:14:01.695414  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:14:01.730388  182389 cri.go:96] found id: "f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a"
	I1229 07:14:01.730412  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:14:01.730419  182389 cri.go:96] found id: ""
	I1229 07:14:01.730430  182389 logs.go:282] 2 containers: [f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:14:01.730485  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:01.734856  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:01.738766  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:14:01.738825  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:14:01.774024  182389 cri.go:96] found id: ""
	I1229 07:14:01.774050  182389 logs.go:282] 0 containers: []
	W1229 07:14:01.774059  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:14:01.774067  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:14:01.774148  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:14:01.808760  182389 cri.go:96] found id: ""
	I1229 07:14:01.808792  182389 logs.go:282] 0 containers: []
	W1229 07:14:01.808804  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:14:01.808819  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:14:01.808830  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:14:01.904134  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:14:01.904167  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:14:01.963898  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:14:01.963923  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:14:01.963936  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:14:02.039441  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:14:02.039484  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:14:02.074669  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:14:02.074695  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:14:02.113472  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:14:02.113501  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:14:02.128130  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:14:02.128159  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:14:02.165621  182389 logs.go:123] Gathering logs for kube-controller-manager [f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a] ...
	I1229 07:14:02.165645  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a"
	I1229 07:14:02.204655  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:14:02.204677  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:14:04.755378  182389 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:14:04.755751  182389 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1229 07:14:04.755801  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:14:04.755863  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:14:04.790628  182389 cri.go:96] found id: "e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:14:04.790647  182389 cri.go:96] found id: ""
	I1229 07:14:04.790655  182389 logs.go:282] 1 containers: [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03]
	I1229 07:14:04.790706  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:04.794549  182389 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:14:04.794604  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:14:04.829841  182389 cri.go:96] found id: ""
	I1229 07:14:04.829872  182389 logs.go:282] 0 containers: []
	W1229 07:14:04.829882  182389 logs.go:284] No container was found matching "etcd"
	I1229 07:14:04.829888  182389 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:14:04.829938  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:14:04.866149  182389 cri.go:96] found id: ""
	I1229 07:14:04.866170  182389 logs.go:282] 0 containers: []
	W1229 07:14:04.866178  182389 logs.go:284] No container was found matching "coredns"
	I1229 07:14:04.866188  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:14:04.866261  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:14:04.900449  182389 cri.go:96] found id: "ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:14:04.900468  182389 cri.go:96] found id: ""
	I1229 07:14:04.900475  182389 logs.go:282] 1 containers: [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45]
	I1229 07:14:04.900545  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:04.904359  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:14:04.904417  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:14:04.941174  182389 cri.go:96] found id: ""
	I1229 07:14:04.941201  182389 logs.go:282] 0 containers: []
	W1229 07:14:04.941209  182389 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:14:04.941228  182389 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:14:04.941286  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:14:04.977039  182389 cri.go:96] found id: "f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a"
	I1229 07:14:04.977065  182389 cri.go:96] found id: "1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:14:04.977071  182389 cri.go:96] found id: ""
	I1229 07:14:04.977080  182389 logs.go:282] 2 containers: [f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759]
	I1229 07:14:04.977140  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:04.980908  182389 ssh_runner.go:195] Run: which crictl
	I1229 07:14:04.984795  182389 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:14:04.984856  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:14:05.018033  182389 cri.go:96] found id: ""
	I1229 07:14:05.018057  182389 logs.go:282] 0 containers: []
	W1229 07:14:05.018067  182389 logs.go:284] No container was found matching "kindnet"
	I1229 07:14:05.018073  182389 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:14:05.018132  182389 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:14:05.052773  182389 cri.go:96] found id: ""
	I1229 07:14:05.052796  182389 logs.go:282] 0 containers: []
	W1229 07:14:05.052803  182389 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:14:05.052816  182389 logs.go:123] Gathering logs for kube-scheduler [ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45] ...
	I1229 07:14:05.052827  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab7241dc5c8c2a299a7ede3eb85d75ade304d211f5bb4ae3d659d4455d83ab45"
	I1229 07:14:05.126420  182389 logs.go:123] Gathering logs for kube-controller-manager [1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759] ...
	I1229 07:14:05.126448  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d16b757804b48197837746fce61d7dfe6bda3185ce49f221bbea9e891681759"
	I1229 07:14:05.160434  182389 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:14:05.160459  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:14:05.211813  182389 logs.go:123] Gathering logs for kubelet ...
	I1229 07:14:05.211839  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:14:05.310918  182389 logs.go:123] Gathering logs for dmesg ...
	I1229 07:14:05.310948  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:14:05.326040  182389 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:14:05.326065  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:14:05.384977  182389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:14:05.384998  182389 logs.go:123] Gathering logs for kube-apiserver [e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03] ...
	I1229 07:14:05.385017  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e508e3d2bbb715ae7ae25b0aadeb6cacca070930911a426a15f18c7f32ecfe03"
	I1229 07:14:05.422489  182389 logs.go:123] Gathering logs for kube-controller-manager [f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a] ...
	I1229 07:14:05.422516  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f73af8a09832156dab34a5bfa7a5b062701fc23c744162318dede9e9ffe5980a"
	I1229 07:14:05.457765  182389 logs.go:123] Gathering logs for container status ...
	I1229 07:14:05.457798  182389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Dec 29 07:13:56 old-k8s-version-876718 crio[779]: time="2025-12-29T07:13:56.878940489Z" level=info msg="Starting container: bbb7390c624a1953db4035741e4d5052fc1e6dac2b82310ad7538eea260b8434" id=b6d2d03e-e364-4f92-87ab-f91a4d384d20 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:13:56 old-k8s-version-876718 crio[779]: time="2025-12-29T07:13:56.881117049Z" level=info msg="Started container" PID=2168 containerID=bbb7390c624a1953db4035741e4d5052fc1e6dac2b82310ad7538eea260b8434 description=kube-system/coredns-5dd5756b68-pnstl/coredns id=b6d2d03e-e364-4f92-87ab-f91a4d384d20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=78a277c7b968404fe74b24b584fc1155de0e4e414ecf5abeec91cf2d0c254138
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.114722339Z" level=info msg="Running pod sandbox: default/busybox/POD" id=dca99cdb-965e-4a6a-95c3-5d9bef740784 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.114802554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.12006637Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:69feb44e8039b6ec003e57b8ef8b1b25b211a0e492c35cebe56427a2cc4aaf54 UID:cb3bcbb9-b40d-499b-89b5-6b34baf24e5b NetNS:/var/run/netns/948254b1-1cf1-4b2e-9c75-d2b166bb598c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007fa40}] Aliases:map[]}"
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.120091683Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.136470732Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:69feb44e8039b6ec003e57b8ef8b1b25b211a0e492c35cebe56427a2cc4aaf54 UID:cb3bcbb9-b40d-499b-89b5-6b34baf24e5b NetNS:/var/run/netns/948254b1-1cf1-4b2e-9c75-d2b166bb598c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00007fa40}] Aliases:map[]}"
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.136629353Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.137398238Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.138109239Z" level=info msg="Ran pod sandbox 69feb44e8039b6ec003e57b8ef8b1b25b211a0e492c35cebe56427a2cc4aaf54 with infra container: default/busybox/POD" id=dca99cdb-965e-4a6a-95c3-5d9bef740784 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.139358628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9aeeeef6-ec10-4822-a508-cb1a87b100fd name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.139452644Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9aeeeef6-ec10-4822-a508-cb1a87b100fd name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.139532569Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9aeeeef6-ec10-4822-a508-cb1a87b100fd name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.140066679Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f4c726c4-a283-4b8a-a5ac-18e17d9a42a0 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:14:00 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:00.140467119Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.396553387Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f4c726c4-a283-4b8a-a5ac-18e17d9a42a0 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.397391155Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2294c9de-41ef-4fd3-8dfc-8b30de59cf61 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.398933537Z" level=info msg="Creating container: default/busybox/busybox" id=b6cb5aae-3add-41da-b556-cb8def36ff7a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.399075557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.402756733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.403211636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.431858958Z" level=info msg="Created container b9646dbaa1e8e235fc711cda1124068bd3f8143df5600f92b9edf8952497a350: default/busybox/busybox" id=b6cb5aae-3add-41da-b556-cb8def36ff7a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.432433553Z" level=info msg="Starting container: b9646dbaa1e8e235fc711cda1124068bd3f8143df5600f92b9edf8952497a350" id=24eb6ca8-0e0f-44ce-a87a-44063b03835e name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:14:01 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:01.434070717Z" level=info msg="Started container" PID=2255 containerID=b9646dbaa1e8e235fc711cda1124068bd3f8143df5600f92b9edf8952497a350 description=default/busybox/busybox id=24eb6ca8-0e0f-44ce-a87a-44063b03835e name=/runtime.v1.RuntimeService/StartContainer sandboxID=69feb44e8039b6ec003e57b8ef8b1b25b211a0e492c35cebe56427a2cc4aaf54
	Dec 29 07:14:07 old-k8s-version-876718 crio[779]: time="2025-12-29T07:14:07.891209674Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b9646dbaa1e8e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   69feb44e8039b       busybox                                          default
	bbb7390c624a1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   78a277c7b9684       coredns-5dd5756b68-pnstl                         kube-system
	ea45518b116d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   53a6a48218595       storage-provisioner                              kube-system
	0bc8ff834ee37       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   de432139b9d7e       kindnet-kgr4x                                    kube-system
	9eafddaf1adf1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   cc8f43203aa37       kube-proxy-2v9kr                                 kube-system
	8888c0f838df5       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   140f7fff691fe       kube-apiserver-old-k8s-version-876718            kube-system
	db83d20c1d6bc       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   6bca7858fe097       kube-scheduler-old-k8s-version-876718            kube-system
	fbf70a963a38e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   2d540ef8ea04b       kube-controller-manager-old-k8s-version-876718   kube-system
	8be2fdc92693c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   972d6582930cc       etcd-old-k8s-version-876718                      kube-system
	
	
	==> coredns [bbb7390c624a1953db4035741e4d5052fc1e6dac2b82310ad7538eea260b8434] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56151 - 16618 "HINFO IN 1156710521728283800.534670317913651589. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015178364s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-876718
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-876718
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=old-k8s-version-876718
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_13_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-876718
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:14:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:14:02 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:14:02 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:14:02 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:14:02 +0000   Mon, 29 Dec 2025 07:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-876718
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                89c29f88-abf1-4b86-a174-1e64c8cd0857
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-pnstl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-876718                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-kgr4x                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-876718             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-876718    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-2v9kr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-876718             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x9 over 43s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node old-k8s-version-876718 event: Registered Node old-k8s-version-876718 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-876718 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8be2fdc92693c0666146db50a7ebba2aef1a6b34d8dced31b61ffcdaa9d44d0e] <==
	{"level":"info","ts":"2025-12-29T07:13:26.730242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-29T07:13:26.730395Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-29T07:13:26.731079Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:13:26.73117Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-29T07:13:26.732589Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-29T07:13:26.731325Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:13:26.731359Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:13:27.321285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:13:27.321337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:13:27.321369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-29T07:13:27.321384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:13:27.321393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:13:27.321406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:13:27.321416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:13:27.322352Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:13:27.322971Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-876718 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:13:27.32301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:13:27.323004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:13:27.32387Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:13:27.323934Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:13:27.324324Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-29T07:13:27.32442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:13:27.328073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:13:27.32819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:13:27.328214Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 07:14:09 up 56 min,  0 user,  load average: 2.78, 2.92, 1.93
	Linux old-k8s-version-876718 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bc8ff834ee37fd25403e5f25cc1a2c9fdf91009486c3aef2f8c49c03ea455c7] <==
	I1229 07:13:46.183095       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:13:46.183385       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1229 07:13:46.183523       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:13:46.183540       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:13:46.183561       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:13:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:13:46.420585       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:13:46.420632       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:13:46.420644       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:13:46.481620       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:13:46.681719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:13:46.681747       1 metrics.go:72] Registering metrics
	I1229 07:13:46.681795       1 controller.go:711] "Syncing nftables rules"
	I1229 07:13:56.391249       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:13:56.391325       1 main.go:301] handling current node
	I1229 07:14:06.382888       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:14:06.382919       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8888c0f838df5c15a8d7ff93a8bde67015cc24e1822b5316045d92786a2d46c1] <==
	I1229 07:13:28.460992       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1229 07:13:28.461008       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1229 07:13:28.461119       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1229 07:13:28.461155       1 aggregator.go:166] initial CRD sync complete...
	I1229 07:13:28.461169       1 autoregister_controller.go:141] Starting autoregister controller
	I1229 07:13:28.461177       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:13:28.461186       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:13:28.462750       1 controller.go:624] quota admission added evaluator for: namespaces
	E1229 07:13:28.480292       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1229 07:13:28.682751       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:13:29.365326       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1229 07:13:29.368886       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:13:29.368906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1229 07:13:29.739838       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:13:29.775455       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:13:29.878245       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:13:29.883871       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1229 07:13:29.884830       1 controller.go:624] quota admission added evaluator for: endpoints
	I1229 07:13:29.889601       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:13:30.424323       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1229 07:13:31.521916       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1229 07:13:31.530951       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:13:31.539938       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1229 07:13:44.184273       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1229 07:13:44.186759       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fbf70a963a38e854499908b2553f9485879aaba3eb10c86856f6992be54b6366] <==
	I1229 07:13:44.215808       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pnstl"
	I1229 07:13:44.223108       1 shared_informer.go:318] Caches are synced for stateful set
	I1229 07:13:44.224701       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wpjzc"
	I1229 07:13:44.231168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.820155ms"
	I1229 07:13:44.237509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.288877ms"
	I1229 07:13:44.237616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.023µs"
	I1229 07:13:44.322847       1 shared_informer.go:318] Caches are synced for cronjob
	I1229 07:13:44.355289       1 shared_informer.go:318] Caches are synced for resource quota
	I1229 07:13:44.373120       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1229 07:13:44.373153       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1229 07:13:44.373714       1 shared_informer.go:318] Caches are synced for endpoint
	I1229 07:13:44.408288       1 shared_informer.go:318] Caches are synced for resource quota
	I1229 07:13:44.721758       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:13:44.723854       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:13:44.723893       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1229 07:13:44.931078       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1229 07:13:44.949701       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wpjzc"
	I1229 07:13:44.959248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.096322ms"
	I1229 07:13:44.968563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.20286ms"
	I1229 07:13:44.968879       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.338µs"
	I1229 07:13:56.523250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.627µs"
	I1229 07:13:56.533316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.46µs"
	I1229 07:13:57.685045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.378173ms"
	I1229 07:13:57.685214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.079µs"
	I1229 07:13:59.126412       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [9eafddaf1adf1a9f3ea9547bd7f9feb5914efa05119db0243f51d8ca3daceb0d] <==
	I1229 07:13:44.761925       1 server_others.go:69] "Using iptables proxy"
	I1229 07:13:44.771897       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1229 07:13:44.797096       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:13:44.801209       1 server_others.go:152] "Using iptables Proxier"
	I1229 07:13:44.801347       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1229 07:13:44.801384       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1229 07:13:44.801466       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1229 07:13:44.802082       1 server.go:846] "Version info" version="v1.28.0"
	I1229 07:13:44.802149       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:13:44.805960       1 config.go:188] "Starting service config controller"
	I1229 07:13:44.806044       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1229 07:13:44.807408       1 config.go:97] "Starting endpoint slice config controller"
	I1229 07:13:44.807481       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1229 07:13:44.810326       1 config.go:315] "Starting node config controller"
	I1229 07:13:44.810358       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1229 07:13:44.906468       1 shared_informer.go:318] Caches are synced for service config
	I1229 07:13:44.910568       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1229 07:13:44.910709       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [db83d20c1d6bc7057894a05bd8f59aec7d0350e98c591c939d205148b020a4e4] <==
	E1229 07:13:28.434780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1229 07:13:28.434681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1229 07:13:28.434820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1229 07:13:28.434845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1229 07:13:28.434905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1229 07:13:28.434929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1229 07:13:29.243440       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1229 07:13:29.243470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1229 07:13:29.278726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1229 07:13:29.278753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1229 07:13:29.279743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1229 07:13:29.279763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1229 07:13:29.325403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1229 07:13:29.325432       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1229 07:13:29.411055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1229 07:13:29.411098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1229 07:13:29.448243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1229 07:13:29.448286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1229 07:13:29.453568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1229 07:13:29.453593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1229 07:13:29.489977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1229 07:13:29.490005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1229 07:13:29.605843       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1229 07:13:29.605874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1229 07:13:30.129259       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.211405    1406 topology_manager.go:215] "Topology Admit Handler" podUID="aa334643-1cbf-42a1-9792-4c11f4bd321e" podNamespace="kube-system" podName="kindnet-kgr4x"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.250767    1406 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.251456    1406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271682    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a7b9034-71d8-48ea-a007-730b24cdf7e1-xtables-lock\") pod \"kube-proxy-2v9kr\" (UID: \"3a7b9034-71d8-48ea-a007-730b24cdf7e1\") " pod="kube-system/kube-proxy-2v9kr"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271726    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa334643-1cbf-42a1-9792-4c11f4bd321e-xtables-lock\") pod \"kindnet-kgr4x\" (UID: \"aa334643-1cbf-42a1-9792-4c11f4bd321e\") " pod="kube-system/kindnet-kgr4x"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271764    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a7b9034-71d8-48ea-a007-730b24cdf7e1-lib-modules\") pod \"kube-proxy-2v9kr\" (UID: \"3a7b9034-71d8-48ea-a007-730b24cdf7e1\") " pod="kube-system/kube-proxy-2v9kr"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271849    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v876w\" (UniqueName: \"kubernetes.io/projected/3a7b9034-71d8-48ea-a007-730b24cdf7e1-kube-api-access-v876w\") pod \"kube-proxy-2v9kr\" (UID: \"3a7b9034-71d8-48ea-a007-730b24cdf7e1\") " pod="kube-system/kube-proxy-2v9kr"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271913    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a7b9034-71d8-48ea-a007-730b24cdf7e1-kube-proxy\") pod \"kube-proxy-2v9kr\" (UID: \"3a7b9034-71d8-48ea-a007-730b24cdf7e1\") " pod="kube-system/kube-proxy-2v9kr"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271949    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aa334643-1cbf-42a1-9792-4c11f4bd321e-cni-cfg\") pod \"kindnet-kgr4x\" (UID: \"aa334643-1cbf-42a1-9792-4c11f4bd321e\") " pod="kube-system/kindnet-kgr4x"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.271980    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa334643-1cbf-42a1-9792-4c11f4bd321e-lib-modules\") pod \"kindnet-kgr4x\" (UID: \"aa334643-1cbf-42a1-9792-4c11f4bd321e\") " pod="kube-system/kindnet-kgr4x"
	Dec 29 07:13:44 old-k8s-version-876718 kubelet[1406]: I1229 07:13:44.272025    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6m9z\" (UniqueName: \"kubernetes.io/projected/aa334643-1cbf-42a1-9792-4c11f4bd321e-kube-api-access-z6m9z\") pod \"kindnet-kgr4x\" (UID: \"aa334643-1cbf-42a1-9792-4c11f4bd321e\") " pod="kube-system/kindnet-kgr4x"
	Dec 29 07:13:46 old-k8s-version-876718 kubelet[1406]: I1229 07:13:46.642651    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2v9kr" podStartSLOduration=2.642597925 podCreationTimestamp="2025-12-29 07:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:13:45.643875738 +0000 UTC m=+14.145383987" watchObservedRunningTime="2025-12-29 07:13:46.642597925 +0000 UTC m=+15.144106174"
	Dec 29 07:13:46 old-k8s-version-876718 kubelet[1406]: I1229 07:13:46.642798    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-kgr4x" podStartSLOduration=1.261308539 podCreationTimestamp="2025-12-29 07:13:44 +0000 UTC" firstStartedPulling="2025-12-29 07:13:44.520151489 +0000 UTC m=+13.021659723" lastFinishedPulling="2025-12-29 07:13:45.901611761 +0000 UTC m=+14.403119994" observedRunningTime="2025-12-29 07:13:46.642575352 +0000 UTC m=+15.144083602" watchObservedRunningTime="2025-12-29 07:13:46.64276881 +0000 UTC m=+15.144277060"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.502631    1406 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.523499    1406 topology_manager.go:215] "Topology Admit Handler" podUID="36610378-7535-44b7-a5f7-2aa2a3f81b36" podNamespace="kube-system" podName="coredns-5dd5756b68-pnstl"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.524816    1406 topology_manager.go:215] "Topology Admit Handler" podUID="554b2d49-670e-4430-bd1c-298394852b83" podNamespace="kube-system" podName="storage-provisioner"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.565092    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36610378-7535-44b7-a5f7-2aa2a3f81b36-config-volume\") pod \"coredns-5dd5756b68-pnstl\" (UID: \"36610378-7535-44b7-a5f7-2aa2a3f81b36\") " pod="kube-system/coredns-5dd5756b68-pnstl"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.565138    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q77kg\" (UniqueName: \"kubernetes.io/projected/554b2d49-670e-4430-bd1c-298394852b83-kube-api-access-q77kg\") pod \"storage-provisioner\" (UID: \"554b2d49-670e-4430-bd1c-298394852b83\") " pod="kube-system/storage-provisioner"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.565158    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64jr\" (UniqueName: \"kubernetes.io/projected/36610378-7535-44b7-a5f7-2aa2a3f81b36-kube-api-access-z64jr\") pod \"coredns-5dd5756b68-pnstl\" (UID: \"36610378-7535-44b7-a5f7-2aa2a3f81b36\") " pod="kube-system/coredns-5dd5756b68-pnstl"
	Dec 29 07:13:56 old-k8s-version-876718 kubelet[1406]: I1229 07:13:56.565181    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/554b2d49-670e-4430-bd1c-298394852b83-tmp\") pod \"storage-provisioner\" (UID: \"554b2d49-670e-4430-bd1c-298394852b83\") " pod="kube-system/storage-provisioner"
	Dec 29 07:13:57 old-k8s-version-876718 kubelet[1406]: I1229 07:13:57.666145    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.66609371 podCreationTimestamp="2025-12-29 07:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:13:57.666056083 +0000 UTC m=+26.167564332" watchObservedRunningTime="2025-12-29 07:13:57.66609371 +0000 UTC m=+26.167601962"
	Dec 29 07:13:57 old-k8s-version-876718 kubelet[1406]: I1229 07:13:57.678146    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pnstl" podStartSLOduration=13.678097809 podCreationTimestamp="2025-12-29 07:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:13:57.678052616 +0000 UTC m=+26.179560866" watchObservedRunningTime="2025-12-29 07:13:57.678097809 +0000 UTC m=+26.179606060"
	Dec 29 07:13:59 old-k8s-version-876718 kubelet[1406]: I1229 07:13:59.812941    1406 topology_manager.go:215] "Topology Admit Handler" podUID="cb3bcbb9-b40d-499b-89b5-6b34baf24e5b" podNamespace="default" podName="busybox"
	Dec 29 07:13:59 old-k8s-version-876718 kubelet[1406]: I1229 07:13:59.886756    1406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pss7k\" (UniqueName: \"kubernetes.io/projected/cb3bcbb9-b40d-499b-89b5-6b34baf24e5b-kube-api-access-pss7k\") pod \"busybox\" (UID: \"cb3bcbb9-b40d-499b-89b5-6b34baf24e5b\") " pod="default/busybox"
	Dec 29 07:14:01 old-k8s-version-876718 kubelet[1406]: I1229 07:14:01.675742    1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.418554802 podCreationTimestamp="2025-12-29 07:13:59 +0000 UTC" firstStartedPulling="2025-12-29 07:14:00.139715058 +0000 UTC m=+28.641223293" lastFinishedPulling="2025-12-29 07:14:01.396849328 +0000 UTC m=+29.898357576" observedRunningTime="2025-12-29 07:14:01.67547331 +0000 UTC m=+30.176981560" watchObservedRunningTime="2025-12-29 07:14:01.675689085 +0000 UTC m=+30.177197334"
	
	
	==> storage-provisioner [ea45518b116d4300f9543d24d826e8e19b8f1f0920358bbb33da7e9bc913a0fd] <==
	I1229 07:13:56.889727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:13:56.899087       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:13:56.899149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 07:13:56.906031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:13:56.906200       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-876718_18ee3562-4e39-4a7d-88b8-fc2c0196e397!
	I1229 07:13:56.906185       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9eb83101-af4b-4f08-89af-4c2a64d6d770", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-876718_18ee3562-4e39-4a7d-88b8-fc2c0196e397 became leader
	I1229 07:13:57.007204       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-876718_18ee3562-4e39-4a7d-88b8-fc2c0196e397!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-876718 -n old-k8s-version-876718
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-876718 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-876718 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-876718 --alsologtostderr -v=1: exit status 80 (1.659216547s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-876718 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:15:29.119960  250701 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:15:29.120251  250701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:29.120262  250701 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:29.120266  250701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:29.120471  250701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:15:29.120695  250701 out.go:368] Setting JSON to false
	I1229 07:15:29.120712  250701 mustload.go:66] Loading cluster: old-k8s-version-876718
	I1229 07:15:29.121053  250701 config.go:182] Loaded profile config "old-k8s-version-876718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:15:29.121481  250701 cli_runner.go:164] Run: docker container inspect old-k8s-version-876718 --format={{.State.Status}}
	I1229 07:15:29.140672  250701 host.go:66] Checking if "old-k8s-version-876718" exists ...
	I1229 07:15:29.140990  250701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:15:29.199574  250701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-29 07:15:29.188506179 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:15:29.200299  250701 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-876718 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:15:29.202135  250701 out.go:179] * Pausing node old-k8s-version-876718 ... 
	I1229 07:15:29.203256  250701 host.go:66] Checking if "old-k8s-version-876718" exists ...
	I1229 07:15:29.203517  250701 ssh_runner.go:195] Run: systemctl --version
	I1229 07:15:29.203558  250701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-876718
	I1229 07:15:29.220467  250701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/old-k8s-version-876718/id_rsa Username:docker}
	I1229 07:15:29.320197  250701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:29.341588  250701 pause.go:52] kubelet running: true
	I1229 07:15:29.341659  250701 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:15:29.504694  250701 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:15:29.504768  250701 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:15:29.570537  250701 cri.go:96] found id: "1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291"
	I1229 07:15:29.570565  250701 cri.go:96] found id: "ae7be12ff50cb259b5279dc02c3c2df281a1f08343c6bdd43a0534b08ec9a6b6"
	I1229 07:15:29.570572  250701 cri.go:96] found id: "eccdd751d5c90dc102d5991e820df94c667027233d147fc5276fe889a9653468"
	I1229 07:15:29.570577  250701 cri.go:96] found id: "604c0d1f5c7a0df5b8eb5cb40329d966a9ac5cc854e5051c0596c0c5eb5f91ed"
	I1229 07:15:29.570580  250701 cri.go:96] found id: "ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9"
	I1229 07:15:29.570584  250701 cri.go:96] found id: "96d9acdaa9e812fcd678cb5aa4c56ffc81629c3f8f930d7c429c5c520e7684c8"
	I1229 07:15:29.570586  250701 cri.go:96] found id: "bacf752453b6e31e76322e28d8bd8e4495c2626f31b52d8c86de2430551e0205"
	I1229 07:15:29.570589  250701 cri.go:96] found id: "69931aee6620ecef0e707aa69dde3c1c55637a74c6d0b2b17435ae34321b5fda"
	I1229 07:15:29.570591  250701 cri.go:96] found id: "176fbe8370904a1abad1e6ed78d46681127fa2c11cbc919f309fe0a96e3bf559"
	I1229 07:15:29.570596  250701 cri.go:96] found id: "066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	I1229 07:15:29.570599  250701 cri.go:96] found id: "9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d"
	I1229 07:15:29.570602  250701 cri.go:96] found id: ""
	I1229 07:15:29.570641  250701 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:15:29.583380  250701 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:15:29Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:15:29.877936  250701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:29.891690  250701 pause.go:52] kubelet running: false
	I1229 07:15:29.891746  250701 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:15:30.036751  250701 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:15:30.036825  250701 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:15:30.106777  250701 cri.go:96] found id: "1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291"
	I1229 07:15:30.106804  250701 cri.go:96] found id: "ae7be12ff50cb259b5279dc02c3c2df281a1f08343c6bdd43a0534b08ec9a6b6"
	I1229 07:15:30.106812  250701 cri.go:96] found id: "eccdd751d5c90dc102d5991e820df94c667027233d147fc5276fe889a9653468"
	I1229 07:15:30.106819  250701 cri.go:96] found id: "604c0d1f5c7a0df5b8eb5cb40329d966a9ac5cc854e5051c0596c0c5eb5f91ed"
	I1229 07:15:30.106825  250701 cri.go:96] found id: "ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9"
	I1229 07:15:30.106832  250701 cri.go:96] found id: "96d9acdaa9e812fcd678cb5aa4c56ffc81629c3f8f930d7c429c5c520e7684c8"
	I1229 07:15:30.106837  250701 cri.go:96] found id: "bacf752453b6e31e76322e28d8bd8e4495c2626f31b52d8c86de2430551e0205"
	I1229 07:15:30.106867  250701 cri.go:96] found id: "69931aee6620ecef0e707aa69dde3c1c55637a74c6d0b2b17435ae34321b5fda"
	I1229 07:15:30.106878  250701 cri.go:96] found id: "176fbe8370904a1abad1e6ed78d46681127fa2c11cbc919f309fe0a96e3bf559"
	I1229 07:15:30.106886  250701 cri.go:96] found id: "066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	I1229 07:15:30.106892  250701 cri.go:96] found id: "9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d"
	I1229 07:15:30.106895  250701 cri.go:96] found id: ""
	I1229 07:15:30.106937  250701 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:15:30.482472  250701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:30.496203  250701 pause.go:52] kubelet running: false
	I1229 07:15:30.496274  250701 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:15:30.637297  250701 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:15:30.637388  250701 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:15:30.703651  250701 cri.go:96] found id: "1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291"
	I1229 07:15:30.703677  250701 cri.go:96] found id: "ae7be12ff50cb259b5279dc02c3c2df281a1f08343c6bdd43a0534b08ec9a6b6"
	I1229 07:15:30.703682  250701 cri.go:96] found id: "eccdd751d5c90dc102d5991e820df94c667027233d147fc5276fe889a9653468"
	I1229 07:15:30.703686  250701 cri.go:96] found id: "604c0d1f5c7a0df5b8eb5cb40329d966a9ac5cc854e5051c0596c0c5eb5f91ed"
	I1229 07:15:30.703701  250701 cri.go:96] found id: "ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9"
	I1229 07:15:30.703704  250701 cri.go:96] found id: "96d9acdaa9e812fcd678cb5aa4c56ffc81629c3f8f930d7c429c5c520e7684c8"
	I1229 07:15:30.703707  250701 cri.go:96] found id: "bacf752453b6e31e76322e28d8bd8e4495c2626f31b52d8c86de2430551e0205"
	I1229 07:15:30.703710  250701 cri.go:96] found id: "69931aee6620ecef0e707aa69dde3c1c55637a74c6d0b2b17435ae34321b5fda"
	I1229 07:15:30.703713  250701 cri.go:96] found id: "176fbe8370904a1abad1e6ed78d46681127fa2c11cbc919f309fe0a96e3bf559"
	I1229 07:15:30.703719  250701 cri.go:96] found id: "066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	I1229 07:15:30.703725  250701 cri.go:96] found id: "9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d"
	I1229 07:15:30.703728  250701 cri.go:96] found id: ""
	I1229 07:15:30.703766  250701 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:15:30.718590  250701 out.go:203] 
	W1229 07:15:30.719735  250701 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:15:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:15:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:15:30.719751  250701 out.go:285] * 
	* 
	W1229 07:15:30.721499  250701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:15:30.722839  250701 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-876718 --alsologtostderr -v=1 failed: exit status 80
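Note on the failure mode above: the pause path fails because `sudo runc list -f json` inside the node exits 1 with "open /run/runc: no such file or directory", and the log shows a single 300ms retry (retry.go:84) before GUEST_PAUSE is raised. The following is an illustration only, not minikube's actual implementation: a minimal Go sketch of that retry pattern, assuming the node is the kic container and is reachable from the host via `docker exec`.

    // retry_runc_list.go - illustrative sketch only (assumed helper, not minikube code).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // listRuncContainers runs the same listing the pause path runs, with one
    // 300ms retry, mirroring the behaviour visible in the log above.
    func listRuncContainers(nodeName string) ([]byte, error) {
    	var lastErr error
    	for attempt := 0; attempt < 2; attempt++ { // initial try + one retry
    		out, err := exec.Command("docker", "exec", nodeName,
    			"sudo", "runc", "list", "-f", "json").CombinedOutput()
    		if err == nil {
    			return out, nil
    		}
    		lastErr = fmt.Errorf("runc list failed: %v: %s", err, out)
    		if attempt == 0 {
    			time.Sleep(300 * time.Millisecond) // same back-off the log reports
    		}
    	}
    	return nil, lastErr
    }

    func main() {
    	out, err := listRuncContainers("old-k8s-version-876718")
    	if err != nil {
    		// In this test run the underlying error was:
    		// "open /run/runc: no such file or directory"
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("%s\n", out)
    }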
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-876718
helpers_test.go:244: (dbg) docker inspect old-k8s-version-876718:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d",
	        "Created": "2025-12-29T07:13:19.142529229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:14:26.625069811Z",
	            "FinishedAt": "2025-12-29T07:14:25.774950509Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/hostname",
	        "HostsPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/hosts",
	        "LogPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d-json.log",
	        "Name": "/old-k8s-version-876718",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-876718:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-876718",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d",
	                "LowerDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/merged",
	                "UpperDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/diff",
	                "WorkDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-876718",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-876718/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-876718",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-876718",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-876718",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ebbe8ca2dca6bdf95a3270e89afdc17c45ad6b1cdebf2233a52bf180d8bb7fdb",
	            "SandboxKey": "/var/run/docker/netns/ebbe8ca2dca6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-876718": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6961f21bb90e6befcbf5f75f7b239c49f9b8e14ab6e6619030de29754825fc86",
	                    "EndpointID": "ebb562e8c62100e33a361035255127bbb64ca101f3dbe4aef373d7293436d382",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:32:d8:2c:5d:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-876718",
	                        "707d2d5cd5ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
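For reference, the published host ports in the inspect output above (22/tcp -> 33058, 2376/tcp -> 33059, 8443/tcp -> 33061, ...) are what the harness resolves with the `docker container inspect -f` template seen earlier in the log. A minimal sketch, assuming only the docker CLI is available on the host, of recovering the SSH port the same way:

    // ssh_host_port.go - illustrative sketch using the same inspect template shown in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort returns the host port docker published for 22/tcp on the node container.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("old-k8s-version-876718")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(port) // per the inspect output above: 33058
    }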
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718: exit status 2 (324.467694ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-876718 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-876718 logs -n 25: (1.114465683s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p NoKubernetes-868221 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ delete  │ -p NoKubernetes-868221                                                                                                                                                                                                                        │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ image   │ test-preload-457393 image list                                                                                                                                                                                                                │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p test-preload-457393                                                                                                                                                                                                                        │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-452455    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p missing-upgrade-967138                                                                                                                                                                                                                     │ missing-upgrade-967138    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p force-systemd-flag-074338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ stop    │ -p kubernetes-upgrade-174577 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ ssh     │ force-systemd-flag-074338 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p force-systemd-flag-074338                                                                                                                                                                                                                  │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ cert-options-001954 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ -p cert-options-001954 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ delete  │ -p cert-options-001954                                                                                                                                                                                                                        │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	│ stop    │ -p old-k8s-version-876718 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-876718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                                                                                     │ stopped-upgrade-518014    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332         │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                                                                                               │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:14:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:14:48.560108  245459 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:14:48.560372  245459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:14:48.560383  245459 out.go:374] Setting ErrFile to fd 2...
	I1229 07:14:48.560387  245459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:14:48.560567  245459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:14:48.561012  245459 out.go:368] Setting JSON to false
	I1229 07:14:48.562191  245459 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3441,"bootTime":1766989048,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:14:48.562259  245459 start.go:143] virtualization: kvm guest
	I1229 07:14:48.564311  245459 out.go:179] * [no-preload-122332] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:14:48.565627  245459 notify.go:221] Checking for updates...
	I1229 07:14:48.565639  245459 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:14:48.566993  245459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:14:48.568298  245459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:14:48.569552  245459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:14:48.570771  245459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:14:48.571881  245459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:14:48.573474  245459 config.go:182] Loaded profile config "cert-expiration-452455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:14:48.573586  245459 config.go:182] Loaded profile config "kubernetes-upgrade-174577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:14:48.573692  245459 config.go:182] Loaded profile config "old-k8s-version-876718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:14:48.573787  245459 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:14:48.599681  245459 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:14:48.599801  245459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:14:48.654053  245459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:14:48.64421016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:14:48.654160  245459 docker.go:319] overlay module found
	I1229 07:14:48.656072  245459 out.go:179] * Using the docker driver based on user configuration
	I1229 07:14:48.657317  245459 start.go:309] selected driver: docker
	I1229 07:14:48.657331  245459 start.go:928] validating driver "docker" against <nil>
	I1229 07:14:48.657342  245459 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:14:48.657831  245459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:14:48.718281  245459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:14:48.708609847 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:14:48.718463  245459 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:14:48.718661  245459 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:14:48.720655  245459 out.go:179] * Using Docker driver with root privileges
	I1229 07:14:48.721984  245459 cni.go:84] Creating CNI manager for ""
	I1229 07:14:48.722055  245459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:14:48.722067  245459 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:14:48.722137  245459 start.go:353] cluster config:
	{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:14:48.723613  245459 out.go:179] * Starting "no-preload-122332" primary control-plane node in "no-preload-122332" cluster
	I1229 07:14:48.724802  245459 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:14:48.726037  245459 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:14:48.727189  245459 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:14:48.727284  245459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:14:48.727332  245459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:14:48.727362  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json: {Name:mk58103441ab97c89bed4e107503b27d1a73b80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:14:48.727475  245459 cache.go:107] acquiring lock: {Name:mk524ccc7d3121d195adc7d1863af70c1e10cb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727510  245459 cache.go:107] acquiring lock: {Name:mkca02c24b265c83f3ba73c3e4bff2d28831259c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727559  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:14:48.727587  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:14:48.727578  245459 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 124.639µs
	I1229 07:14:48.727576  245459 cache.go:107] acquiring lock: {Name:mkceb8935c60ed9a529274ab83854aa71dbe9a7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727600  245459 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 92.417µs
	I1229 07:14:48.727585  245459 cache.go:107] acquiring lock: {Name:mk2827ee73a1c5c546c3035bd69b730bda1ef682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727609  245459 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:14:48.727603  245459 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:14:48.727541  245459 cache.go:107] acquiring lock: {Name:mk52f4077c79f8806c7eb2c6a7253ed35dcf09ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727521  245459 cache.go:107] acquiring lock: {Name:mk4e3cc5ac4b58daa39b77bf4639b595a7b5e1bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727664  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:14:48.727676  245459 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 166.582µs
	I1229 07:14:48.727655  245459 cache.go:107] acquiring lock: {Name:mk6876db4017aa5ef89eab36b68c600dec62345c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727662  245459 cache.go:107] acquiring lock: {Name:mkeb7d05fa98b741eb24c41313df007ce9bbb93e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727685  245459 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:14:48.727668  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:14:48.727775  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:14:48.727797  245459 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 210.438µs
	I1229 07:14:48.727820  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:14:48.727769  245459 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 234.428µs
	I1229 07:14:48.727829  245459 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:14:48.727829  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:14:48.727830  245459 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 232.169µs
	I1229 07:14:48.727842  245459 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:14:48.727822  245459 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:14:48.727840  245459 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 233.187µs
	I1229 07:14:48.727851  245459 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:14:48.727684  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:14:48.727868  245459 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 294.495µs
	I1229 07:14:48.727874  245459 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:14:48.727880  245459 cache.go:87] Successfully saved all images to host disk.
	I1229 07:14:48.749020  245459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:14:48.749037  245459 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:14:48.749053  245459 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:14:48.749090  245459 start.go:360] acquireMachinesLock for no-preload-122332: {Name:mka83f33e779c9aed23f5a0e4fef1298c9058532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.749192  245459 start.go:364] duration metric: took 78.893µs to acquireMachinesLock for "no-preload-122332"
	I1229 07:14:48.749233  245459 start.go:93] Provisioning new machine with config: &{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:14:48.749320  245459 start.go:125] createHost starting for "" (driver="docker")
	W1229 07:14:47.932666  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:14:49.939330  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:14:48.751837  245459 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:14:48.752067  245459 start.go:159] libmachine.API.Create for "no-preload-122332" (driver="docker")
	I1229 07:14:48.752096  245459 client.go:173] LocalClient.Create starting
	I1229 07:14:48.752182  245459 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:14:48.752240  245459 main.go:144] libmachine: Decoding PEM data...
	I1229 07:14:48.752266  245459 main.go:144] libmachine: Parsing certificate...
	I1229 07:14:48.752322  245459 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:14:48.752344  245459 main.go:144] libmachine: Decoding PEM data...
	I1229 07:14:48.752353  245459 main.go:144] libmachine: Parsing certificate...
	I1229 07:14:48.752691  245459 cli_runner.go:164] Run: docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:14:48.769711  245459 cli_runner.go:211] docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:14:48.769793  245459 network_create.go:284] running [docker network inspect no-preload-122332] to gather additional debugging logs...
	I1229 07:14:48.769809  245459 cli_runner.go:164] Run: docker network inspect no-preload-122332
	W1229 07:14:48.786633  245459 cli_runner.go:211] docker network inspect no-preload-122332 returned with exit code 1
	I1229 07:14:48.786669  245459 network_create.go:287] error running [docker network inspect no-preload-122332]: docker network inspect no-preload-122332: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-122332 not found
	I1229 07:14:48.786682  245459 network_create.go:289] output of [docker network inspect no-preload-122332]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-122332 not found
	
	** /stderr **
	I1229 07:14:48.786807  245459 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:14:48.803944  245459 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:14:48.805484  245459 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:14:48.806773  245459 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:14:48.807284  245459 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66e171323e2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:d9:01:28:19:dc} reservation:<nil>}
	I1229 07:14:48.807714  245459 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-faaa954500ab IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8a:1a:a6:08:26} reservation:<nil>}
	I1229 07:14:48.808357  245459 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c17fe0}
	I1229 07:14:48.808385  245459 network_create.go:124] attempt to create docker network no-preload-122332 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1229 07:14:48.808427  245459 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-122332 no-preload-122332
	I1229 07:14:48.860285  245459 network_create.go:108] docker network no-preload-122332 192.168.94.0/24 created
	I1229 07:14:48.860328  245459 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-122332" container
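The lines above show minikube scanning the existing Docker bridge networks, skipping every 192.168.x.0/24 that is already taken, and then creating a dedicated bridge network for the profile. Below is a minimal Go sketch of that idea; it is not minikube's network_create.go, it only considers Docker-managed networks (not other host interfaces), and the profile name is simply the example from this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the subnets already used by any Docker network.
func takenSubnets() (map[string]bool, error) {
	names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, name := range strings.Fields(string(names)) {
		out, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	profile := "no-preload-122332" // example profile name from this log
	taken, err := takenSubnets()
	if err != nil {
		fmt.Println("listing networks:", err)
		return
	}
	// Walk the same 192.168.49 -> 58 -> 67 -> ... progression (steps of 9) seen above.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+cidr, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+profile,
			profile).CombinedOutput()
		if err != nil {
			fmt.Printf("docker network create: %v\n%s", err, out)
			return
		}
		fmt.Println("created network", profile, "on", cidr)
		return
	}
	fmt.Println("no free 192.168.x.0/24 subnet found")
}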
	I1229 07:14:48.860412  245459 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:14:48.885552  245459 cli_runner.go:164] Run: docker volume create no-preload-122332 --label name.minikube.sigs.k8s.io=no-preload-122332 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:14:48.904065  245459 oci.go:103] Successfully created a docker volume no-preload-122332
	I1229 07:14:48.904155  245459 cli_runner.go:164] Run: docker run --rm --name no-preload-122332-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-122332 --entrypoint /usr/bin/test -v no-preload-122332:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:14:49.410107  245459 oci.go:107] Successfully prepared a docker volume no-preload-122332
	I1229 07:14:49.410159  245459 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1229 07:14:49.410270  245459 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:14:49.410317  245459 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:14:49.410468  245459 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:14:49.490297  245459 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-122332 --name no-preload-122332 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-122332 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-122332 --network no-preload-122332 --ip 192.168.94.2 --volume no-preload-122332:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:14:49.866241  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Running}}
	I1229 07:14:49.891171  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:14:49.917198  245459 cli_runner.go:164] Run: docker exec no-preload-122332 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:14:49.983409  245459 oci.go:144] the created container "no-preload-122332" has a running status.
	I1229 07:14:49.983438  245459 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa...
	I1229 07:14:50.197679  245459 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:14:50.237885  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:14:50.265423  245459 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:14:50.265445  245459 kic_runner.go:114] Args: [docker exec --privileged no-preload-122332 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:14:50.339055  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:14:50.362853  245459 machine.go:94] provisionDockerMachine start ...
	I1229 07:14:50.363174  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:50.382988  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:50.383338  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:50.383357  245459 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:14:50.545461  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:14:50.545490  245459 ubuntu.go:182] provisioning hostname "no-preload-122332"
	I1229 07:14:50.545557  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:50.569110  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:50.569480  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:50.569502  245459 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-122332 && echo "no-preload-122332" | sudo tee /etc/hostname
	I1229 07:14:50.737395  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:14:50.737485  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:50.763043  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:50.763394  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:50.763437  245459 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-122332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-122332/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-122332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:14:50.914627  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: 
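The SSH command above makes the hostname mapping in /etc/hosts idempotent: it adds or rewrites the 127.0.1.1 entry only when no line for the node hostname exists yet. A hypothetical Go helper (not minikube's own code) that builds the same script for a given hostname:

package main

import "fmt"

// hostsFixupScript returns the shell snippet that maps 127.0.1.1 to the node
// hostname, adding or rewriting the entry only if it is missing.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	// In the real flow this script is executed on the node over SSH; here it is
	// only printed for illustration.
	fmt.Println(hostsFixupScript("no-preload-122332"))
}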
	I1229 07:14:50.914660  245459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:14:50.914684  245459 ubuntu.go:190] setting up certificates
	I1229 07:14:50.914706  245459 provision.go:84] configureAuth start
	I1229 07:14:50.914777  245459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:14:50.938733  245459 provision.go:143] copyHostCerts
	I1229 07:14:50.938802  245459 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:14:50.938814  245459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:14:50.938892  245459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:14:50.938996  245459 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:14:50.939010  245459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:14:50.939051  245459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:14:50.939144  245459 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:14:50.939158  245459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:14:50.939199  245459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:14:50.939325  245459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.no-preload-122332 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-122332]
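The "generating server cert" line above signs a machine server certificate with the minikube CA for the listed IP and DNS SANs. A minimal Go sketch of that step, assuming a throwaway CA generated on the fly instead of the ca.pem/ca-key.pem from the profile directory (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads the existing minikube CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-122332"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-122332"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}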
	I1229 07:14:51.007801  245459 provision.go:177] copyRemoteCerts
	I1229 07:14:51.007864  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:14:51.007913  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.029044  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.134180  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:14:51.155889  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:14:51.174639  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:14:51.193206  245459 provision.go:87] duration metric: took 278.459636ms to configureAuth
	I1229 07:14:51.193246  245459 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:14:51.193433  245459 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:14:51.193542  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.215072  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:51.215353  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:51.215374  245459 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:14:51.502390  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:14:51.502420  245459 machine.go:97] duration metric: took 1.139540323s to provisionDockerMachine
	I1229 07:14:51.502434  245459 client.go:176] duration metric: took 2.750327082s to LocalClient.Create
	I1229 07:14:51.502461  245459 start.go:167] duration metric: took 2.750392298s to libmachine.API.Create "no-preload-122332"
	I1229 07:14:51.502476  245459 start.go:293] postStartSetup for "no-preload-122332" (driver="docker")
	I1229 07:14:51.502490  245459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:14:51.502570  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:14:51.502623  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.521000  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.624776  245459 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:14:51.629374  245459 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:14:51.629402  245459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:14:51.629415  245459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:14:51.629466  245459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:14:51.629575  245459 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:14:51.629697  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:14:51.639088  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:14:51.662317  245459 start.go:296] duration metric: took 159.824652ms for postStartSetup
	I1229 07:14:51.662717  245459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:14:51.685599  245459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:14:51.685875  245459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:14:51.685923  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.708201  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.811411  245459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:14:51.817079  245459 start.go:128] duration metric: took 3.06774455s to createHost
	I1229 07:14:51.817106  245459 start.go:83] releasing machines lock for "no-preload-122332", held for 3.067898739s
	I1229 07:14:51.817187  245459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:14:51.840236  245459 ssh_runner.go:195] Run: cat /version.json
	I1229 07:14:51.840295  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.840297  245459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:14:51.840370  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.863761  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.864308  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:52.042079  245459 ssh_runner.go:195] Run: systemctl --version
	I1229 07:14:52.050752  245459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:14:52.093375  245459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:14:52.099342  245459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:14:52.099414  245459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:14:52.130588  245459 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:14:52.130617  245459 start.go:496] detecting cgroup driver to use...
	I1229 07:14:52.130656  245459 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:14:52.130702  245459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:14:52.150541  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:14:52.166197  245459 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:14:52.166302  245459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:14:52.187330  245459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:14:52.212528  245459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:14:52.328916  245459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:14:52.447171  245459 docker.go:234] disabling docker service ...
	I1229 07:14:52.447281  245459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:14:52.471655  245459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:14:52.488320  245459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:14:52.602578  245459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:14:52.716362  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:14:52.732482  245459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:14:52.749518  245459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:14:52.749580  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.763000  245459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:14:52.763070  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.773911  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.785178  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.797174  245459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:14:52.807995  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.819485  245459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.837882  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.848729  245459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:14:52.858423  245459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:14:52.867838  245459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:14:52.970897  245459 ssh_runner.go:195] Run: sudo systemctl restart crio
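The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the expected pause image and the systemd cgroup manager before the daemon is restarted. A small Go sketch of the same substitutions, applied to an in-memory sample config rather than the real file; the sample contents and the in-place conmon_cgroup rewrite are simplifications:

package main

import (
	"fmt"
	"regexp"
)

// Sample of the relevant settings; the real file lives under /etc/crio/crio.conf.d/.
const sampleConf = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`

func main() {
	conf := sampleConf
	// Same intent as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Same intent as the cgroup_manager / conmon_cgroup rewrites in the log.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
	// A real run would then `systemctl daemon-reload` and restart crio, as above.
}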
	I1229 07:14:53.890892  245459 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:14:53.890969  245459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:14:53.896105  245459 start.go:574] Will wait 60s for crictl version
	I1229 07:14:53.896167  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:53.901269  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:14:53.933757  245459 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:14:53.933834  245459 ssh_runner.go:195] Run: crio --version
	I1229 07:14:53.970748  245459 ssh_runner.go:195] Run: crio --version
	I1229 07:14:54.008255  245459 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:14:49.542623  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:49.543067  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
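The 225445-prefixed lines interleaved here show an apiserver health probe: a GET against /healthz on the control-plane endpoint, with a refused connection reported as "stopped". A hypothetical Go sketch of such a probe, using the endpoint from this log and skipping certificate verification the way a plain liveness check would:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Liveness only; no identity check of the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}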
	I1229 07:14:49.543123  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:14:49.543195  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:14:49.582785  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:14:49.582812  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:14:49.582819  225445 cri.go:96] found id: ""
	I1229 07:14:49.582828  225445 logs.go:282] 2 containers: [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:14:49.582883  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.588955  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.594900  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:14:49.595020  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:14:49.632157  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:14:49.632181  225445 cri.go:96] found id: ""
	I1229 07:14:49.632191  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:14:49.632866  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.638456  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:14:49.638519  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:14:49.679909  225445 cri.go:96] found id: ""
	I1229 07:14:49.679939  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.679951  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:14:49.679958  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:14:49.680009  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:14:49.719747  225445 cri.go:96] found id: "bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298"
	I1229 07:14:49.719772  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:14:49.719778  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:14:49.719783  225445 cri.go:96] found id: ""
	I1229 07:14:49.719792  225445 logs.go:282] 3 containers: [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:14:49.719886  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.725641  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.731065  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.737676  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:14:49.737759  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:14:49.777182  225445 cri.go:96] found id: ""
	I1229 07:14:49.777214  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.777252  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:14:49.777261  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:14:49.777334  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:14:49.813525  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:14:49.813550  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:14:49.813557  225445 cri.go:96] found id: ""
	I1229 07:14:49.813567  225445 logs.go:282] 2 containers: [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:14:49.813630  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.818968  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.822985  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:14:49.823060  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:14:49.857262  225445 cri.go:96] found id: ""
	I1229 07:14:49.857292  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.857304  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:14:49.857312  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:14:49.857371  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:14:49.894388  225445 cri.go:96] found id: ""
	I1229 07:14:49.894417  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.894428  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:14:49.894441  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:14:49.894458  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:14:49.938527  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:14:49.938583  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:14:49.974192  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:14:49.974497  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:14:50.016194  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:14:50.016237  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:14:50.140324  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:14:50.140411  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:14:50.210120  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:14:50.210321  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:14:50.210339  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:14:50.261264  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:14:50.261300  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:14:50.301092  225445 logs.go:123] Gathering logs for kube-scheduler [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298] ...
	I1229 07:14:50.301120  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298"
	W1229 07:14:50.330316  225445 logs.go:138] Found kube-scheduler [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298] problem: E1229 07:14:06.198316       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:14:50.330340  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:14:50.330354  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:14:50.408422  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:14:50.408458  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:14:50.496177  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:14:50.496212  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:14:50.535512  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:14:50.535543  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:14:50.551208  225445 logs.go:123] Gathering logs for kube-apiserver [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e] ...
	I1229 07:14:50.551251  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:14:50.593904  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:14:50.593930  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:14:50.593986  225445 out.go:285] X Problems detected in kube-scheduler [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298]:
	W1229 07:14:50.594002  225445 out.go:285]   E1229 07:14:06.198316       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:14:50.594010  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:14:50.594016  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:14:52.434165  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:14:54.436704  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:14:54.012919  245459 cli_runner.go:164] Run: docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:14:54.035947  245459 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:14:54.041248  245459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:14:54.055334  245459 kubeadm.go:884] updating cluster {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:14:54.055450  245459 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:14:54.055483  245459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:14:54.082944  245459 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1229 07:14:54.082966  245459 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
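Because no preload tarball exists for this Kubernetes/runtime combination, minikube falls back to asking the runtime which images are already present via `sudo crictl images --output json`. A hypothetical Go sketch of that presence check (only repo tags are compared; the JSON field names follow crictl's output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList is the minimal subset of `crictl images --output json` needed here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// haveImage reports whether the runtime already stores the given image tag.
func haveImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := haveImage("registry.k8s.io/kube-apiserver:v1.35.0")
	fmt.Println("preloaded:", ok, "err:", err)
}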
	I1229 07:14:54.083029  245459 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.083059  245459 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.083073  245459 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.083091  245459 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.083105  245459 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.083053  245459 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.083027  245459 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:54.083071  245459 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1229 07:14:54.084395  245459 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.084407  245459 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.084395  245459 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1229 07:14:54.084397  245459 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.084397  245459 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:54.084455  245459 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.084458  245459 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.084397  245459 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.221549  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.229003  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.235095  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.236198  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.239326  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.255782  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1229 07:14:54.267443  245459 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508" in container runtime
	I1229 07:14:54.267497  245459 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.267544  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.273949  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.280402  245459 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1229 07:14:54.280444  245459 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.280487  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332094  245459 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc" in container runtime
	I1229 07:14:54.332135  245459 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.332137  245459 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499" in container runtime
	I1229 07:14:54.332150  245459 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1229 07:14:54.332168  245459 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.332187  245459 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.332193  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332205  245459 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1229 07:14:54.332214  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332237  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332246  245459 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1229 07:14:54.332280  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332288  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.332308  245459 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8" in container runtime
	I1229 07:14:54.332334  245459 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.332341  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.332365  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.338127  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.338153  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:14:54.338671  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.338747  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.369754  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.372134  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.372134  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.376767  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:14:54.376879  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.376954  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.377031  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.408860  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.411446  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.411466  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.416828  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:14:54.418351  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.418417  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.418458  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.450693  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.457783  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1229 07:14:54.457839  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1229 07:14:54.457867  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1229 07:14:54.457887  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:14:54.457918  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:14:54.457927  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1229 07:14:54.465328  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1229 07:14:54.465432  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:14:54.465559  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0
	I1229 07:14:54.465565  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0
	I1229 07:14:54.465650  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:14:54.465671  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:14:54.484007  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0
	I1229 07:14:54.484055  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1229 07:14:54.484104  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (23144960 bytes)
	I1229 07:14:54.484133  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1229 07:14:54.484160  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1229 07:14:54.484110  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:14:54.484170  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1229 07:14:54.484213  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1229 07:14:54.484234  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1229 07:14:54.484285  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1229 07:14:54.484239  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1229 07:14:54.484327  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1229 07:14:54.484341  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (27696640 bytes)
	I1229 07:14:54.484311  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (17248256 bytes)
	I1229 07:14:54.493421  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1229 07:14:54.493449  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (25791488 bytes)
	I1229 07:14:54.581788  245459 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1229 07:14:54.581864  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1229 07:14:55.067346  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:55.323126  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1229 07:14:55.323159  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:14:55.323168  245459 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1229 07:14:55.323196  245459 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:55.323215  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:14:55.323246  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:56.709236  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.385969585s)
	I1229 07:14:56.709266  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1229 07:14:56.709273  245459 ssh_runner.go:235] Completed: which crictl: (1.386006105s)
	I1229 07:14:56.709288  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:14:56.709328  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:56.709329  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:14:56.736290  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:57.527383  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1229 07:14:57.527424  245459 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:14:57.527480  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:14:57.527485  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1229 07:14:56.932390  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:14:58.932686  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:00.934504  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:14:58.743101  245459 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.215584622s)
	I1229 07:14:58.743169  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1229 07:14:58.743167  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.215661248s)
	I1229 07:14:58.743191  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1229 07:14:58.743234  245459 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:14:58.743275  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:14:58.743280  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:14:59.913084  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.16978422s)
	I1229 07:14:59.913110  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1229 07:14:59.913112  245459 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.169814917s)
	I1229 07:14:59.913133  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:14:59.913157  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1229 07:14:59.913168  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:14:59.913184  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1229 07:15:01.299073  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.385876515s)
	I1229 07:15:01.299106  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1229 07:15:01.299125  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:15:01.299171  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:15:02.364232  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.065025379s)
	I1229 07:15:02.364259  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1229 07:15:02.364288  245459 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:15:02.364342  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:15:02.901195  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1229 07:15:02.901252  245459 cache_images.go:125] Successfully loaded all cached images
	I1229 07:15:02.901266  245459 cache_images.go:94] duration metric: took 8.818282444s to LoadCachedImages
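	Note on the image-load path above: with no preload tarball, each cached image archive under /var/lib/minikube/images is checked with stat, scp'd over when missing, and then handed to `sudo podman load -i <archive>` so CRI-O can use it. A minimal local sketch of that check-then-load step (illustrative Go, not minikube's cache_images.go; the tarball path is taken from the log; the real flow runs these commands over SSH inside the node container via ssh_runner):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadImage mimics the stat-then-load pattern from the log: if the tarball is
    // present, hand it to `podman load`; otherwise report that a transfer is needed.
    func loadImage(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("image tarball not present, would need to transfer it first: %w", err)
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load failed: %v\n%s", err, out)
        }
        fmt.Printf("loaded %s\n", tarball)
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/kube-scheduler_v1.35.0"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }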
	I1229 07:15:02.901278  245459 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:15:02.901360  245459 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-122332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:15:02.901423  245459 ssh_runner.go:195] Run: crio config
	I1229 07:15:02.946475  245459 cni.go:84] Creating CNI manager for ""
	I1229 07:15:02.946504  245459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:15:02.946527  245459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:15:02.946551  245459 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-122332 NodeName:no-preload-122332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:15:02.946715  245459 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-122332"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
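	The block above is the multi-document config that later lands in /var/tmp/minikube/kubeadm.yaml and is passed to `kubeadm init --config`: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by `---`. A small sketch that splits such a file and lists each document's apiVersion and kind (illustrative only; it uses gopkg.in/yaml.v3, which is not part of the test code, and the path is the one from the log):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        // yaml.v3's Decoder returns one document per Decode call and io.EOF at the end.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }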
	
	I1229 07:15:02.946773  245459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:15:02.955426  245459 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1229 07:15:02.955485  245459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1229 07:15:02.963767  245459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1229 07:15:02.963815  245459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1229 07:15:02.963815  245459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1229 07:15:02.963860  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1229 07:15:02.963900  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1229 07:15:02.963914  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:02.968099  245459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1229 07:15:02.968137  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1229 07:15:02.969449  245459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1229 07:15:02.969475  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1229 07:15:02.985772  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1229 07:15:03.031174  245459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1229 07:15:03.031210  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
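	As the binary.go lines above indicate, kubectl, kubeadm and kubelet are fetched from dl.k8s.io together with a companion .sha256 checksum file, and are copied into /var/lib/minikube/binaries only when the remote stat fails. A rough sketch of the download-and-verify step for kubectl (the URL is taken from the log; this is not minikube's download code):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads a URL fully into memory; adequate for a sketch.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl"
        bin, err := fetch(base)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        sum, err := fetch(base + ".sha256") // published checksum file, as referenced in the log
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0]
        if hex.EncodeToString(got[:]) != want {
            fmt.Fprintln(os.Stderr, "checksum mismatch")
            os.Exit(1)
        }
        fmt.Printf("kubectl verified (%d bytes)\n", len(bin))
    }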
	I1229 07:15:03.485571  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:15:03.494174  245459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:15:03.506958  245459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:15:00.599298  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:15:00.599755  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:15:00.599816  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:15:00.599864  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:15:00.630594  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:00.630618  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:00.630625  225445 cri.go:96] found id: ""
	I1229 07:15:00.630634  225445 logs.go:282] 2 containers: [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:15:00.630698  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.635205  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.639105  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:15:00.639168  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:15:00.668518  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:00.668542  225445 cri.go:96] found id: ""
	I1229 07:15:00.668551  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:15:00.668624  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.672911  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:15:00.672977  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:15:00.703605  225445 cri.go:96] found id: ""
	I1229 07:15:00.703647  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.703655  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:15:00.703660  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:15:00.703704  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:15:00.730689  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:15:00.730709  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:00.730712  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:00.730715  225445 cri.go:96] found id: ""
	I1229 07:15:00.730724  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:15:00.730801  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.735191  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.739491  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.743740  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:15:00.743796  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:15:00.774983  225445 cri.go:96] found id: ""
	I1229 07:15:00.775012  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.775023  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:15:00.775031  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:15:00.775083  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:15:00.804277  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:00.804302  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:00.804308  225445 cri.go:96] found id: ""
	I1229 07:15:00.804317  225445 logs.go:282] 2 containers: [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:15:00.804370  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.808514  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.812116  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:15:00.812191  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:15:00.841069  225445 cri.go:96] found id: ""
	I1229 07:15:00.841090  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.841098  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:15:00.841103  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:15:00.841164  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:15:00.870655  225445 cri.go:96] found id: ""
	I1229 07:15:00.870680  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.870690  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:15:00.870700  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:15:00.870714  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:15:00.883900  225445 logs.go:123] Gathering logs for kube-apiserver [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e] ...
	I1229 07:15:00.883926  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:00.919064  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:15:00.919098  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:00.955095  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:15:00.955123  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:00.990776  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:15:00.990801  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:01.026346  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:15:01.026378  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:01.054756  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:15:01.054785  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:01.084122  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:15:01.084152  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:15:01.156209  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:15:01.156248  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:15:01.156265  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:15:01.188644  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:01.188691  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:15:01.188709  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:01.266991  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:15:01.267026  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:15:01.335632  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:15:01.335672  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:15:01.369818  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:15:01.369853  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:15:01.462080  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:01.462107  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:15:01.462167  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:15:01.462182  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:01.462188  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:01.462195  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
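	The problem flagged above for profile 225445 is a kube-scheduler container that exited with "bind: address already in use" on 127.0.0.1:10259, i.e. another scheduler instance was still holding the port when this one started. A trivial probe for that symptom (illustrative only, not something minikube runs):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Try to bind the kube-scheduler health/metrics port; failure reproduces the error above.
        ln, err := net.Listen("tcp", "127.0.0.1:10259")
        if err != nil {
            fmt.Println("port 10259 is busy:", err)
            return
        }
        ln.Close()
        fmt.Println("port 10259 is free")
    }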
	W1229 07:15:03.433485  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:05.433731  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:15:03.630717  245459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1229 07:15:03.644828  245459 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:15:03.648791  245459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:15:03.659287  245459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:15:03.741781  245459 ssh_runner.go:195] Run: sudo systemctl start kubelet
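	After the kubelet.service unit and its 10-kubeadm.conf drop-in are written, systemd is reloaded and kubelet is started, as shown above. The same steps, sketched as a standalone program with a final is-active check (illustrative; the real run goes through ssh_runner inside the node container):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run executes a command and streams its output, returning any error.
    func run(args ...string) error {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, args := range [][]string{
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "start", "kubelet"},
            {"sudo", "systemctl", "is-active", "kubelet"},
        } {
            if err := run(args...); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
    }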
	I1229 07:15:03.768679  245459 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332 for IP: 192.168.94.2
	I1229 07:15:03.768704  245459 certs.go:195] generating shared ca certs ...
	I1229 07:15:03.768723  245459 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.768858  245459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:15:03.768905  245459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:15:03.768920  245459 certs.go:257] generating profile certs ...
	I1229 07:15:03.768981  245459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key
	I1229 07:15:03.769002  245459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt with IP's: []
	I1229 07:15:03.837813  245459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt ...
	I1229 07:15:03.837840  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: {Name:mkd414613fec1a2dd800ddc9ca6bc6a4705cfab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.837999  245459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key ...
	I1229 07:15:03.838010  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key: {Name:mk6f029bf999c498ef9ce3ce68c7c0381a32c859 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.838087  245459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595
	I1229 07:15:03.838103  245459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1229 07:15:03.868115  245459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595 ...
	I1229 07:15:03.868139  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595: {Name:mk0aa58ec87a70bc621ab6f481dd4ea712bbcdbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.868287  245459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595 ...
	I1229 07:15:03.868300  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595: {Name:mkf61453ffeb635bf79fcbe951ac5845a6244320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.868369  245459 certs.go:382] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt
	I1229 07:15:03.868440  245459 certs.go:386] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key
	I1229 07:15:03.868491  245459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key
	I1229 07:15:03.868505  245459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt with IP's: []
	I1229 07:15:03.964388  245459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt ...
	I1229 07:15:03.964415  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt: {Name:mk23a95ba3379465e793c7c74c3ec80fed5ae7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.964557  245459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key ...
	I1229 07:15:03.964570  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key: {Name:mk352b6374d05a023835df7477e511e85a67fab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.964749  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:15:03.964786  245459 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:15:03.964798  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:15:03.964823  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:15:03.964848  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:15:03.964873  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:15:03.964948  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:15:03.965509  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:15:03.983731  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:15:04.001766  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:15:04.019067  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:15:04.036086  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:15:04.052492  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:15:04.069080  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:15:04.086290  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:15:04.103177  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:15:04.123209  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:15:04.140734  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:15:04.157885  245459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:15:04.170000  245459 ssh_runner.go:195] Run: openssl version
	I1229 07:15:04.176125  245459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.183418  245459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:15:04.190458  245459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.194486  245459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.194536  245459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.229647  245459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:15:04.237298  245459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12733.pem /etc/ssl/certs/51391683.0
	I1229 07:15:04.245013  245459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.252185  245459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:15:04.259492  245459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.262992  245459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.263033  245459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.296758  245459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:15:04.304343  245459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127332.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:15:04.311720  245459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.319013  245459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:15:04.326130  245459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.329968  245459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.330024  245459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.365777  245459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:15:04.373840  245459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
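	The certificate wiring above copies each PEM into /usr/share/ca-certificates, asks `openssl x509 -hash -noout -in <pem>` for its subject hash, and symlinks `<hash>.0` under /etc/ssl/certs (b5213941.0 for minikubeCA.pem here). A sketch of that hash-and-symlink step for a single file (illustrative; shells out to openssl just as the log does, and needs root to write into /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", pem)
    }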
	I1229 07:15:04.381407  245459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:15:04.385104  245459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:15:04.385167  245459 kubeadm.go:401] StartCluster: {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:15:04.385248  245459 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:15:04.385292  245459 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:15:04.412702  245459 cri.go:96] found id: ""
	I1229 07:15:04.412772  245459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:15:04.420913  245459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:15:04.429486  245459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:15:04.429543  245459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:15:04.438362  245459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:15:04.438379  245459 kubeadm.go:158] found existing configuration files:
	
	I1229 07:15:04.438413  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:15:04.445947  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:15:04.446001  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:15:04.453407  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:15:04.460635  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:15:04.460679  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:15:04.467824  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:15:04.475197  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:15:04.475260  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:15:04.482620  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:15:04.490555  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:15:04.490607  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:15:04.498848  245459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:15:04.594562  245459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:15:04.651558  245459 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1229 07:15:07.433883  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:09.933334  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:15:12.498044  245459 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:15:12.498107  245459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:15:12.498206  245459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:15:12.498322  245459 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1229 07:15:12.498381  245459 kubeadm.go:319] OS: Linux
	I1229 07:15:12.498454  245459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:15:12.498521  245459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:15:12.498573  245459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:15:12.498631  245459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:15:12.498676  245459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:15:12.498717  245459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:15:12.498764  245459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:15:12.498804  245459 kubeadm.go:319] CGROUPS_IO: enabled
	I1229 07:15:12.498870  245459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:15:12.498961  245459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:15:12.499070  245459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:15:12.499136  245459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:15:12.500731  245459 out.go:252]   - Generating certificates and keys ...
	I1229 07:15:12.500835  245459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:15:12.500908  245459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:15:12.500982  245459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:15:12.501033  245459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:15:12.501101  245459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:15:12.501169  245459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:15:12.501300  245459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:15:12.501461  245459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-122332] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1229 07:15:12.501539  245459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:15:12.501674  245459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-122332] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1229 07:15:12.501767  245459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:15:12.501831  245459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:15:12.501876  245459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:15:12.501927  245459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:15:12.501977  245459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:15:12.502026  245459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:15:12.502072  245459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:15:12.502138  245459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:15:12.502194  245459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:15:12.502298  245459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:15:12.502372  245459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:15:12.504411  245459 out.go:252]   - Booting up control plane ...
	I1229 07:15:12.504496  245459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:15:12.504563  245459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:15:12.504620  245459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:15:12.504711  245459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:15:12.504794  245459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:15:12.504895  245459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:15:12.504972  245459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:15:12.505011  245459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:15:12.505125  245459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:15:12.505286  245459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:15:12.505364  245459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.796195ms
	I1229 07:15:12.505494  245459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:15:12.505565  245459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1229 07:15:12.505651  245459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:15:12.505745  245459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:15:12.505815  245459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 510.728293ms
	I1229 07:15:12.505872  245459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.184700007s
	I1229 07:15:12.505935  245459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00121426s
	I1229 07:15:12.506096  245459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:15:12.506265  245459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:15:12.506360  245459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:15:12.506575  245459 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-122332 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:15:12.506633  245459 kubeadm.go:319] [bootstrap-token] Using token: n59rak.5imj7ctdwsn26hut
	I1229 07:15:12.507814  245459 out.go:252]   - Configuring RBAC rules ...
	I1229 07:15:12.507947  245459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:15:12.508029  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:15:12.508177  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:15:12.508318  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:15:12.508432  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:15:12.508517  245459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:15:12.508621  245459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:15:12.508695  245459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:15:12.508765  245459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:15:12.508775  245459 kubeadm.go:319] 
	I1229 07:15:12.508864  245459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:15:12.508871  245459 kubeadm.go:319] 
	I1229 07:15:12.508989  245459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:15:12.509004  245459 kubeadm.go:319] 
	I1229 07:15:12.509030  245459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:15:12.509089  245459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:15:12.509136  245459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:15:12.509145  245459 kubeadm.go:319] 
	I1229 07:15:12.509200  245459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:15:12.509206  245459 kubeadm.go:319] 
	I1229 07:15:12.509273  245459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:15:12.509285  245459 kubeadm.go:319] 
	I1229 07:15:12.509336  245459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:15:12.509407  245459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:15:12.509468  245459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:15:12.509473  245459 kubeadm.go:319] 
	I1229 07:15:12.509563  245459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:15:12.509658  245459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:15:12.509666  245459 kubeadm.go:319] 
	I1229 07:15:12.509736  245459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n59rak.5imj7ctdwsn26hut \
	I1229 07:15:12.509829  245459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:15:12.509852  245459 kubeadm.go:319] 	--control-plane 
	I1229 07:15:12.509857  245459 kubeadm.go:319] 
	I1229 07:15:12.509927  245459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:15:12.509935  245459 kubeadm.go:319] 
	I1229 07:15:12.510048  245459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n59rak.5imj7ctdwsn26hut \
	I1229 07:15:12.510159  245459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
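	For reference, the --discovery-token-ca-cert-hash printed in the join command above is, per kubeadm's token-based discovery scheme, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from the CA cert path used in this log (illustrative only):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // CA path from the log
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Hash the raw SubjectPublicKeyInfo bytes, which is what the join command expects.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }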
	I1229 07:15:12.510170  245459 cni.go:84] Creating CNI manager for ""
	I1229 07:15:12.510176  245459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:15:12.511411  245459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:15:12.512524  245459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:15:12.517100  245459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:15:12.517122  245459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:15:12.530368  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:15:12.729377  245459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:15:12.729493  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:12.729564  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-122332 minikube.k8s.io/updated_at=2025_12_29T07_15_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=no-preload-122332 minikube.k8s.io/primary=true
	I1229 07:15:12.806649  245459 ops.go:34] apiserver oom_adj: -16
	I1229 07:15:12.806736  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:13.306968  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:11.464342  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:15:11.464751  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:15:11.464806  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:15:11.464856  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:15:11.492541  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:11.492567  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:11.492576  225445 cri.go:96] found id: ""
	I1229 07:15:11.492585  225445 logs.go:282] 2 containers: [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:15:11.492648  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.497133  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.500792  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:15:11.500862  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:15:11.529062  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:11.529088  225445 cri.go:96] found id: ""
	I1229 07:15:11.529098  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:15:11.529155  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.534990  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:15:11.535056  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:15:11.566743  225445 cri.go:96] found id: ""
	I1229 07:15:11.566769  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.566780  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:15:11.566787  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:15:11.566855  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:15:11.596946  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:15:11.596971  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:11.596978  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:11.596982  225445 cri.go:96] found id: ""
	I1229 07:15:11.596991  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:15:11.597047  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.601285  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.605011  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.608892  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:15:11.608954  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:15:11.636667  225445 cri.go:96] found id: ""
	I1229 07:15:11.636688  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.636696  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:15:11.636701  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:15:11.636747  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:15:11.677966  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:11.677992  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:11.677998  225445 cri.go:96] found id: ""
	I1229 07:15:11.678007  225445 logs.go:282] 2 containers: [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:15:11.678065  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.682709  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.686715  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:15:11.686777  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:15:11.715970  225445 cri.go:96] found id: ""
	I1229 07:15:11.715998  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.716008  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:15:11.716016  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:15:11.716074  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:15:11.755350  225445 cri.go:96] found id: ""
	I1229 07:15:11.755378  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.755391  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:15:11.755403  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:15:11.755417  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:15:11.822637  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:15:11.822672  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:15:11.857279  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:15:11.857308  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:15:11.962906  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:15:11.962941  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:15:12.022279  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:15:12.022308  225445 logs.go:123] Gathering logs for kube-apiserver [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e] ...
	I1229 07:15:12.022320  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:12.058675  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:15:12.058718  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:15:12.086092  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:12.086119  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:15:12.086132  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:12.148550  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:15:12.148585  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:15:12.162760  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:15:12.162787  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:12.194638  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:15:12.194673  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:12.229590  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:15:12.229618  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:12.258090  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:15:12.258118  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:12.285030  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:15:12.285053  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:12.313633  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:12.313656  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:15:12.313718  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:15:12.313732  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:12.313736  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:12.313741  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:15:11.933775  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:14.433392  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:15:15.932655  241214 pod_ready.go:94] pod "coredns-5dd5756b68-pnstl" is "Ready"
	I1229 07:15:15.932682  241214 pod_ready.go:86] duration metric: took 39.005533371s for pod "coredns-5dd5756b68-pnstl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.935615  241214 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.939504  241214 pod_ready.go:94] pod "etcd-old-k8s-version-876718" is "Ready"
	I1229 07:15:15.939521  241214 pod_ready.go:86] duration metric: took 3.884552ms for pod "etcd-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.943368  241214 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.948475  241214 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-876718" is "Ready"
	I1229 07:15:15.948497  241214 pod_ready.go:86] duration metric: took 5.11088ms for pod "kube-apiserver-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.951294  241214 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:16.131126  241214 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-876718" is "Ready"
	I1229 07:15:16.131156  241214 pod_ready.go:86] duration metric: took 179.84269ms for pod "kube-controller-manager-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:16.330850  241214 pod_ready.go:83] waiting for pod "kube-proxy-2v9kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:13.807434  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:14.307053  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:14.807023  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:15.307113  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:15.806957  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:16.307413  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:16.807208  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:16.875084  245459 kubeadm.go:1114] duration metric: took 4.145669474s to wait for elevateKubeSystemPrivileges
	I1229 07:15:16.875133  245459 kubeadm.go:403] duration metric: took 12.489967583s to StartCluster
	I1229 07:15:16.875155  245459 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:16.875240  245459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:15:16.876897  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:16.877094  245459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:15:16.877106  245459 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:15:16.877167  245459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:15:16.877281  245459 addons.go:70] Setting storage-provisioner=true in profile "no-preload-122332"
	I1229 07:15:16.877302  245459 addons.go:239] Setting addon storage-provisioner=true in "no-preload-122332"
	I1229 07:15:16.877304  245459 addons.go:70] Setting default-storageclass=true in profile "no-preload-122332"
	I1229 07:15:16.877326  245459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-122332"
	I1229 07:15:16.877326  245459 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:15:16.877339  245459 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:15:16.877705  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:15:16.877850  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:15:16.879393  245459 out.go:179] * Verifying Kubernetes components...
	I1229 07:15:16.880601  245459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:15:16.902200  245459 addons.go:239] Setting addon default-storageclass=true in "no-preload-122332"
	I1229 07:15:16.902261  245459 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:15:16.902672  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:15:16.903154  245459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:15:16.904611  245459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:15:16.904635  245459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:15:16.904697  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:15:16.931251  245459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:15:16.931278  245459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:15:16.931348  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:15:16.937725  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:15:16.956103  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:15:16.969554  245459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:15:17.017385  245459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:15:17.052632  245459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:15:17.067932  245459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:15:17.125695  245459 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1229 07:15:17.127886  245459 node_ready.go:35] waiting up to 6m0s for node "no-preload-122332" to be "Ready" ...
	I1229 07:15:17.365855  245459 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:15:16.730892  241214 pod_ready.go:94] pod "kube-proxy-2v9kr" is "Ready"
	I1229 07:15:16.730923  241214 pod_ready.go:86] duration metric: took 400.042744ms for pod "kube-proxy-2v9kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:16.931966  241214 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:17.331306  241214 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-876718" is "Ready"
	I1229 07:15:17.331331  241214 pod_ready.go:86] duration metric: took 399.339839ms for pod "kube-scheduler-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:17.331342  241214 pod_ready.go:40] duration metric: took 40.408734279s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:15:17.385823  241214 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1229 07:15:17.387294  241214 out.go:203] 
	W1229 07:15:17.388729  241214 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1229 07:15:17.389634  241214 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:15:17.393320  241214 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-876718" cluster and "default" namespace by default
	I1229 07:15:17.367294  245459 addons.go:530] duration metric: took 490.128613ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:15:17.630299  245459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-122332" context rescaled to 1 replicas
	W1229 07:15:19.131291  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	W1229 07:15:21.131616  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	I1229 07:15:22.314644  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1229 07:15:23.631413  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	W1229 07:15:25.631599  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	W1229 07:15:28.131467  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	I1229 07:15:27.315664  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:15:27.315722  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:15:27.315775  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:15:27.343539  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:15:27.343559  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:27.343564  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:27.343569  225445 cri.go:96] found id: ""
	I1229 07:15:27.343577  225445 logs.go:282] 3 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:15:27.343639  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.347759  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.351644  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.355264  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:15:27.355319  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:15:27.381569  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:27.381592  225445 cri.go:96] found id: ""
	I1229 07:15:27.381601  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:15:27.381643  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.385479  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:15:27.385538  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:15:27.412489  225445 cri.go:96] found id: ""
	I1229 07:15:27.412509  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.412522  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:15:27.412538  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:15:27.412597  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:15:27.439607  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:15:27.439626  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:27.439629  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:27.439633  225445 cri.go:96] found id: ""
	I1229 07:15:27.439640  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:15:27.439692  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.443554  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.447225  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.450775  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:15:27.450822  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:15:27.477556  225445 cri.go:96] found id: ""
	I1229 07:15:27.477578  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.477588  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:15:27.477594  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:15:27.477647  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:15:27.503963  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:15:27.503985  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:27.503989  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:27.503993  225445 cri.go:96] found id: ""
	I1229 07:15:27.504000  225445 logs.go:282] 3 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:15:27.504053  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.507958  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.511470  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.514939  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:15:27.514985  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:15:27.541451  225445 cri.go:96] found id: ""
	I1229 07:15:27.541470  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.541478  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:15:27.541483  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:15:27.541521  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:15:27.568143  225445 cri.go:96] found id: ""
	I1229 07:15:27.568170  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.568178  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:15:27.568198  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:15:27.568214  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:15:27.637286  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:15:27.637320  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:15:27.667307  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:15:27.667336  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:27.701378  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:15:27.701406  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:15:27.728319  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:27.728344  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:15:27.728357  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:15:27.754049  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:15:27.754078  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:27.782641  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:15:27.782667  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Dec 29 07:14:54 old-k8s-version-876718 crio[581]: time="2025-12-29T07:14:54.214772047Z" level=info msg="Created container 9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s/kubernetes-dashboard" id=80e0f23f-ca41-4cbf-a9c6-d031753609fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:14:54 old-k8s-version-876718 crio[581]: time="2025-12-29T07:14:54.215348646Z" level=info msg="Starting container: 9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d" id=668d5f43-79ef-40f3-8f66-4b89cbe5f865 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:14:54 old-k8s-version-876718 crio[581]: time="2025-12-29T07:14:54.217068343Z" level=info msg="Started container" PID=1760 containerID=9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s/kubernetes-dashboard id=668d5f43-79ef-40f3-8f66-4b89cbe5f865 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7a686f95cd2efe1677e49f57f8079986cb576a6e2e1017007868f4a846348f4
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.920534725Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=53abb409-d5dd-444e-a09a-cc295cbebccc name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.921428709Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=93465f7b-0e62-42d7-8bce-5619bb058b6a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.922429967Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9a1a4ae6-0cce-423d-97c3-1e844626b155 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.922574052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.92711518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.927350882Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/358467f876ebda4dc98b2a58ee54a2d105544a149b8439a201f686350f83f461/merged/etc/passwd: no such file or directory"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.927393449Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/358467f876ebda4dc98b2a58ee54a2d105544a149b8439a201f686350f83f461/merged/etc/group: no such file or directory"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.927670123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.952331517Z" level=info msg="Created container 1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291: kube-system/storage-provisioner/storage-provisioner" id=9a1a4ae6-0cce-423d-97c3-1e844626b155 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.952875969Z" level=info msg="Starting container: 1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291" id=a4f38460-1867-419d-b4e5-cd100b9ee64e name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.954630557Z" level=info msg="Started container" PID=1783 containerID=1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291 description=kube-system/storage-provisioner/storage-provisioner id=a4f38460-1867-419d-b4e5-cd100b9ee64e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b7e9a850b29e52b48cd76092abef8f4ac926e2341d6e98a4421700eba006433
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.816038313Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d40568c0-f508-4f9a-b0ca-2fce8005217c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.817138726Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=32949e6d-08f9-4189-8eec-f715c9abf720 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.818257066Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper" id=98c0fb52-2225-4cb4-97e3-bf71bdd792a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.818423339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.82515048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.825644405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.859401715Z" level=info msg="Created container 066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper" id=98c0fb52-2225-4cb4-97e3-bf71bdd792a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.86004188Z" level=info msg="Starting container: 066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e" id=8d63c492-bd2a-4f58-a80f-b5e1028bf08b name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.862394835Z" level=info msg="Started container" PID=1799 containerID=066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper id=8d63c492-bd2a-4f58-a80f-b5e1028bf08b name=/runtime.v1.RuntimeService/StartContainer sandboxID=010a1e3a7f346f7621d89fae77040ebba110b7d244254feed8eb905d42eabb66
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.937693385Z" level=info msg="Removing container: b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3" id=4421796b-28d7-4c51-ab1a-0df137c78ad7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.947936553Z" level=info msg="Removed container b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper" id=4421796b-28d7-4c51-ab1a-0df137c78ad7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	066fa833e3723       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   010a1e3a7f346       dashboard-metrics-scraper-5f989dc9cf-crtg5       kubernetes-dashboard
	1580265780bb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   4b7e9a850b29e       storage-provisioner                              kube-system
	9d62d4e5a2727       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   c7a686f95cd2e       kubernetes-dashboard-8694d4445c-bfg2s            kubernetes-dashboard
	31a800f24afba       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   a0d5a7e778c7e       busybox                                          default
	ae7be12ff50cb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   2e01748f89f3a       coredns-5dd5756b68-pnstl                         kube-system
	eccdd751d5c90       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 0                   032867e7b8ecb       kindnet-kgr4x                                    kube-system
	604c0d1f5c7a0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   44cf0ff67c1a3       kube-proxy-2v9kr                                 kube-system
	ffdc68478751c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   4b7e9a850b29e       storage-provisioner                              kube-system
	96d9acdaa9e81       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   c400430fdb9b7       kube-scheduler-old-k8s-version-876718            kube-system
	bacf752453b6e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   127d072e7e307       etcd-old-k8s-version-876718                      kube-system
	69931aee6620e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   784883888af1d       kube-apiserver-old-k8s-version-876718            kube-system
	176fbe8370904       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   293f6383c4e5a       kube-controller-manager-old-k8s-version-876718   kube-system
	
	
	==> coredns [ae7be12ff50cb259b5279dc02c3c2df281a1f08343c6bdd43a0534b08ec9a6b6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35966 - 65488 "HINFO IN 2699774808872182651.1480664370928045377. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018340458s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-876718
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-876718
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=old-k8s-version-876718
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_13_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-876718
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:15:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-876718
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                89c29f88-abf1-4b86-a174-1e64c8cd0857
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-pnstl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-876718                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-kgr4x                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-876718             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-876718    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-2v9kr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-876718             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-crtg5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bfg2s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x9 over 2m5s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-876718 event: Registered Node old-k8s-version-876718 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-876718 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x9 over 59s)    kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)    kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                  node-controller  Node old-k8s-version-876718 event: Registered Node old-k8s-version-876718 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [bacf752453b6e31e76322e28d8bd8e4495c2626f31b52d8c86de2430551e0205] <==
	{"level":"info","ts":"2025-12-29T07:14:33.370838Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:14:33.370851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-29T07:14:33.370997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-29T07:14:33.370862Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:14:33.371098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:14:33.371128Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:14:33.37342Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:14:33.373585Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-29T07:14:33.373641Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-29T07:14:33.373712Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:14:33.373741Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:14:34.461064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:14:34.461112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:14:34.461157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:14:34.461175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.461183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.461231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.461246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.462324Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-876718 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:14:34.462398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:14:34.462543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:14:34.462459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:14:34.46257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:14:34.463807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:14:34.46381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:15:31 up 58 min,  0 user,  load average: 1.86, 2.58, 1.89
	Linux old-k8s-version-876718 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eccdd751d5c90dc102d5991e820df94c667027233d147fc5276fe889a9653468] <==
	I1229 07:14:36.405724       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:14:36.406037       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1229 07:14:36.406253       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:14:36.406283       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:14:36.406309       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:14:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:14:36.606465       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:14:36.606530       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:14:36.606546       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:14:36.606704       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:14:37.002398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:14:37.002452       1 metrics.go:72] Registering metrics
	I1229 07:14:37.002536       1 controller.go:711] "Syncing nftables rules"
	I1229 07:14:46.606900       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:14:46.606980       1 main.go:301] handling current node
	I1229 07:14:56.607477       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:14:56.607511       1 main.go:301] handling current node
	I1229 07:15:06.607122       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:15:06.607156       1 main.go:301] handling current node
	I1229 07:15:16.606431       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:15:16.606474       1 main.go:301] handling current node
	I1229 07:15:26.609318       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:15:26.609367       1 main.go:301] handling current node
	
	
	==> kube-apiserver [69931aee6620ecef0e707aa69dde3c1c55637a74c6d0b2b17435ae34321b5fda] <==
	I1229 07:14:35.360418       1 handler_discovery.go:404] Starting ResourceDiscoveryManager
	I1229 07:14:35.407866       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:14:35.417459       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1229 07:14:35.456206       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1229 07:14:35.456401       1 shared_informer.go:318] Caches are synced for configmaps
	I1229 07:14:35.456416       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1229 07:14:35.456761       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:14:35.457121       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1229 07:14:35.457237       1 aggregator.go:166] initial CRD sync complete...
	I1229 07:14:35.457249       1 autoregister_controller.go:141] Starting autoregister controller
	I1229 07:14:35.457257       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:14:35.457264       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:14:35.461130       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1229 07:14:35.461144       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1229 07:14:36.271586       1 controller.go:624] quota admission added evaluator for: namespaces
	I1229 07:14:36.301915       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1229 07:14:36.318236       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:14:36.324784       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:14:36.331912       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1229 07:14:36.359052       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1229 07:14:36.364013       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.219.198"}
	I1229 07:14:36.377553       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.34.49"}
	I1229 07:14:47.680135       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1229 07:14:47.829270       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:14:48.080275       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [176fbe8370904a1abad1e6ed78d46681127fa2c11cbc919f309fe0a96e3bf559] <==
	I1229 07:14:47.832395       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1229 07:14:47.938556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="246.35682ms"
	I1229 07:14:47.938656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.521µs"
	I1229 07:14:47.940514       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-bfg2s"
	I1229 07:14:47.940546       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-crtg5"
	I1229 07:14:47.948454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="256.258325ms"
	I1229 07:14:47.948897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="256.667654ms"
	I1229 07:14:47.955279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.218925ms"
	I1229 07:14:47.955376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.627µs"
	I1229 07:14:47.956154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.65015ms"
	I1229 07:14:47.956264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.538µs"
	I1229 07:14:47.960385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="157.17µs"
	I1229 07:14:47.979753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.47µs"
	I1229 07:14:48.200095       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:14:48.217480       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:14:48.217508       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1229 07:14:51.892621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.417µs"
	I1229 07:14:52.900934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.655µs"
	I1229 07:14:53.898209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.108µs"
	I1229 07:14:55.006120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.911636ms"
	I1229 07:14:55.006254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.837µs"
	I1229 07:15:11.948150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.703µs"
	I1229 07:15:15.684770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.118027ms"
	I1229 07:15:15.684860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.275µs"
	I1229 07:15:18.858374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.883µs"
	
	
	==> kube-proxy [604c0d1f5c7a0df5b8eb5cb40329d966a9ac5cc854e5051c0596c0c5eb5f91ed] <==
	I1229 07:14:36.245283       1 server_others.go:69] "Using iptables proxy"
	I1229 07:14:36.253880       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1229 07:14:36.273658       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:14:36.276069       1 server_others.go:152] "Using iptables Proxier"
	I1229 07:14:36.276112       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1229 07:14:36.276122       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1229 07:14:36.276169       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1229 07:14:36.276443       1 server.go:846] "Version info" version="v1.28.0"
	I1229 07:14:36.276462       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:14:36.277121       1 config.go:315] "Starting node config controller"
	I1229 07:14:36.277139       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1229 07:14:36.277365       1 config.go:97] "Starting endpoint slice config controller"
	I1229 07:14:36.278106       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1229 07:14:36.278255       1 config.go:188] "Starting service config controller"
	I1229 07:14:36.278267       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1229 07:14:36.377522       1 shared_informer.go:318] Caches are synced for node config
	I1229 07:14:36.378856       1 shared_informer.go:318] Caches are synced for service config
	I1229 07:14:36.378871       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [96d9acdaa9e812fcd678cb5aa4c56ffc81629c3f8f930d7c429c5c520e7684c8] <==
	I1229 07:14:33.907100       1 serving.go:348] Generated self-signed cert in-memory
	I1229 07:14:35.430321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1229 07:14:35.430350       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:14:35.433947       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1229 07:14:35.433971       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1229 07:14:35.433991       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:14:35.434016       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1229 07:14:35.434017       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1229 07:14:35.434038       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1229 07:14:35.435060       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1229 07:14:35.435110       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1229 07:14:35.534215       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1229 07:14:35.537316       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1229 07:14:35.537330       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.092822     741 projected.go:198] Error preparing data for projected volume kube-api-access-ksqf7 for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5: configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.092895     741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99331fe4-6ed1-40f6-a042-2fe358572968-kube-api-access-ksqf7 podName:99331fe4-6ed1-40f6-a042-2fe358572968 nodeName:}" failed. No retries permitted until 2025-12-29 07:14:48.592872375 +0000 UTC m=+15.869252498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ksqf7" (UniqueName: "kubernetes.io/projected/99331fe4-6ed1-40f6-a042-2fe358572968-kube-api-access-ksqf7") pod "dashboard-metrics-scraper-5f989dc9cf-crtg5" (UID: "99331fe4-6ed1-40f6-a042-2fe358572968") : configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.093888     741 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.093925     741 projected.go:198] Error preparing data for projected volume kube-api-access-5btrb for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s: configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.093973     741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afa0a6d0-35c1-415f-837e-8217b89f54fc-kube-api-access-5btrb podName:afa0a6d0-35c1-415f-837e-8217b89f54fc nodeName:}" failed. No retries permitted until 2025-12-29 07:14:48.59395877 +0000 UTC m=+15.870338894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5btrb" (UniqueName: "kubernetes.io/projected/afa0a6d0-35c1-415f-837e-8217b89f54fc-kube-api-access-5btrb") pod "kubernetes-dashboard-8694d4445c-bfg2s" (UID: "afa0a6d0-35c1-415f-837e-8217b89f54fc") : configmap "kube-root-ca.crt" not found
	Dec 29 07:14:51 old-k8s-version-876718 kubelet[741]: I1229 07:14:51.878872     741 scope.go:117] "RemoveContainer" containerID="b9ed89bf5f182630b48cc8fede54d1c3d86cd5a1df609989dc7ac13c1606f58b"
	Dec 29 07:14:52 old-k8s-version-876718 kubelet[741]: I1229 07:14:52.882664     741 scope.go:117] "RemoveContainer" containerID="b9ed89bf5f182630b48cc8fede54d1c3d86cd5a1df609989dc7ac13c1606f58b"
	Dec 29 07:14:52 old-k8s-version-876718 kubelet[741]: I1229 07:14:52.882934     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:14:52 old-k8s-version-876718 kubelet[741]: E1229 07:14:52.883339     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:14:53 old-k8s-version-876718 kubelet[741]: I1229 07:14:53.885491     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:14:53 old-k8s-version-876718 kubelet[741]: E1229 07:14:53.885863     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:14:54 old-k8s-version-876718 kubelet[741]: I1229 07:14:54.944305     741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s" podStartSLOduration=2.649624286 podCreationTimestamp="2025-12-29 07:14:47 +0000 UTC" firstStartedPulling="2025-12-29 07:14:48.879242852 +0000 UTC m=+16.155622989" lastFinishedPulling="2025-12-29 07:14:54.173847699 +0000 UTC m=+21.450227827" observedRunningTime="2025-12-29 07:14:54.944090784 +0000 UTC m=+22.220470928" watchObservedRunningTime="2025-12-29 07:14:54.944229124 +0000 UTC m=+22.220609248"
	Dec 29 07:14:58 old-k8s-version-876718 kubelet[741]: I1229 07:14:58.848328     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:14:58 old-k8s-version-876718 kubelet[741]: E1229 07:14:58.848595     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:15:06 old-k8s-version-876718 kubelet[741]: I1229 07:15:06.920074     741 scope.go:117] "RemoveContainer" containerID="ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: I1229 07:15:11.815377     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: I1229 07:15:11.936422     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: I1229 07:15:11.936650     741 scope.go:117] "RemoveContainer" containerID="066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: E1229 07:15:11.937013     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:15:18 old-k8s-version-876718 kubelet[741]: I1229 07:15:18.848361     741 scope.go:117] "RemoveContainer" containerID="066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	Dec 29 07:15:18 old-k8s-version-876718 kubelet[741]: E1229 07:15:18.848652     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: kubelet.service: Consumed 1.566s CPU time.
	
	
	==> kubernetes-dashboard [9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d] <==
	2025/12/29 07:14:54 Using namespace: kubernetes-dashboard
	2025/12/29 07:14:54 Using in-cluster config to connect to apiserver
	2025/12/29 07:14:54 Using secret token for csrf signing
	2025/12/29 07:14:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:14:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:14:54 Successful initial request to the apiserver, version: v1.28.0
	2025/12/29 07:14:54 Generating JWE encryption key
	2025/12/29 07:14:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:14:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:14:54 Initializing JWE encryption key from synchronized object
	2025/12/29 07:14:54 Creating in-cluster Sidecar client
	2025/12/29 07:14:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:14:54 Serving insecurely on HTTP port: 9090
	2025/12/29 07:15:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:14:54 Starting overwatch
	
	
	==> storage-provisioner [1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291] <==
	I1229 07:15:06.965798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:15:06.973514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:15:06.973563       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 07:15:24.370193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:15:24.370392       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-876718_aa15927a-3006-42c9-92b8-345f1f431730!
	I1229 07:15:24.370359       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9eb83101-af4b-4f08-89af-4c2a64d6d770", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-876718_aa15927a-3006-42c9-92b8-345f1f431730 became leader
	I1229 07:15:24.470571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-876718_aa15927a-3006-42c9-92b8-345f1f431730!
	
	
	==> storage-provisioner [ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9] <==
	I1229 07:14:36.182777       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:15:06.184543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-876718 -n old-k8s-version-876718
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-876718 -n old-k8s-version-876718: exit status 2 (340.426966ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-876718 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-876718
helpers_test.go:244: (dbg) docker inspect old-k8s-version-876718:

-- stdout --
	[
	    {
	        "Id": "707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d",
	        "Created": "2025-12-29T07:13:19.142529229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:14:26.625069811Z",
	            "FinishedAt": "2025-12-29T07:14:25.774950509Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/hostname",
	        "HostsPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/hosts",
	        "LogPath": "/var/lib/docker/containers/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d/707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d-json.log",
	        "Name": "/old-k8s-version-876718",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-876718:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-876718",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "707d2d5cd5ce42857aeddeecf651ef884f3999cc918cf89592a98dcee898883d",
	                "LowerDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/merged",
	                "UpperDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/diff",
	                "WorkDir": "/var/lib/docker/overlay2/674fb664845fd5c6a2ef24debb7531ad5eb9beab7fa93bd8dc00561d5a5ed330/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-876718",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-876718/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-876718",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-876718",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-876718",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ebbe8ca2dca6bdf95a3270e89afdc17c45ad6b1cdebf2233a52bf180d8bb7fdb",
	            "SandboxKey": "/var/run/docker/netns/ebbe8ca2dca6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-876718": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6961f21bb90e6befcbf5f75f7b239c49f9b8e14ab6e6619030de29754825fc86",
	                    "EndpointID": "ebb562e8c62100e33a361035255127bbb64ca101f3dbe4aef373d7293436d382",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:32:d8:2c:5d:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-876718",
	                        "707d2d5cd5ce"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718: exit status 2 (325.828128ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-876718 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-876718 logs -n 25: (1.147730251s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p NoKubernetes-868221 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ delete  │ -p NoKubernetes-868221                                                                                                                                                                                                                        │ NoKubernetes-868221       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ image   │ test-preload-457393 image list                                                                                                                                                                                                                │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p test-preload-457393                                                                                                                                                                                                                        │ test-preload-457393       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-452455    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p missing-upgrade-967138                                                                                                                                                                                                                     │ missing-upgrade-967138    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p force-systemd-flag-074338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ stop    │ -p kubernetes-upgrade-174577 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ ssh     │ force-systemd-flag-074338 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p force-systemd-flag-074338                                                                                                                                                                                                                  │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ cert-options-001954 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ -p cert-options-001954 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ delete  │ -p cert-options-001954                                                                                                                                                                                                                        │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	│ stop    │ -p old-k8s-version-876718 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-876718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                                                                                     │ stopped-upgrade-518014    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332         │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                                                                                               │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:14:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:14:48.560108  245459 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:14:48.560372  245459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:14:48.560383  245459 out.go:374] Setting ErrFile to fd 2...
	I1229 07:14:48.560387  245459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:14:48.560567  245459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:14:48.561012  245459 out.go:368] Setting JSON to false
	I1229 07:14:48.562191  245459 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3441,"bootTime":1766989048,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:14:48.562259  245459 start.go:143] virtualization: kvm guest
	I1229 07:14:48.564311  245459 out.go:179] * [no-preload-122332] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:14:48.565627  245459 notify.go:221] Checking for updates...
	I1229 07:14:48.565639  245459 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:14:48.566993  245459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:14:48.568298  245459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:14:48.569552  245459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:14:48.570771  245459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:14:48.571881  245459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:14:48.573474  245459 config.go:182] Loaded profile config "cert-expiration-452455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:14:48.573586  245459 config.go:182] Loaded profile config "kubernetes-upgrade-174577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:14:48.573692  245459 config.go:182] Loaded profile config "old-k8s-version-876718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:14:48.573787  245459 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:14:48.599681  245459 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:14:48.599801  245459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:14:48.654053  245459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:14:48.64421016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:14:48.654160  245459 docker.go:319] overlay module found
	I1229 07:14:48.656072  245459 out.go:179] * Using the docker driver based on user configuration
	I1229 07:14:48.657317  245459 start.go:309] selected driver: docker
	I1229 07:14:48.657331  245459 start.go:928] validating driver "docker" against <nil>
	I1229 07:14:48.657342  245459 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:14:48.657831  245459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:14:48.718281  245459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:14:48.708609847 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:14:48.718463  245459 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:14:48.718661  245459 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:14:48.720655  245459 out.go:179] * Using Docker driver with root privileges
	I1229 07:14:48.721984  245459 cni.go:84] Creating CNI manager for ""
	I1229 07:14:48.722055  245459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:14:48.722067  245459 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:14:48.722137  245459 start.go:353] cluster config:
	{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:14:48.723613  245459 out.go:179] * Starting "no-preload-122332" primary control-plane node in "no-preload-122332" cluster
	I1229 07:14:48.724802  245459 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:14:48.726037  245459 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:14:48.727189  245459 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:14:48.727284  245459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:14:48.727332  245459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:14:48.727362  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json: {Name:mk58103441ab97c89bed4e107503b27d1a73b80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:14:48.727475  245459 cache.go:107] acquiring lock: {Name:mk524ccc7d3121d195adc7d1863af70c1e10cb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727510  245459 cache.go:107] acquiring lock: {Name:mkca02c24b265c83f3ba73c3e4bff2d28831259c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727559  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:14:48.727587  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:14:48.727578  245459 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 124.639µs
	I1229 07:14:48.727576  245459 cache.go:107] acquiring lock: {Name:mkceb8935c60ed9a529274ab83854aa71dbe9a7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727600  245459 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 92.417µs
	I1229 07:14:48.727585  245459 cache.go:107] acquiring lock: {Name:mk2827ee73a1c5c546c3035bd69b730bda1ef682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727609  245459 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:14:48.727603  245459 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:14:48.727541  245459 cache.go:107] acquiring lock: {Name:mk52f4077c79f8806c7eb2c6a7253ed35dcf09ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727521  245459 cache.go:107] acquiring lock: {Name:mk4e3cc5ac4b58daa39b77bf4639b595a7b5e1bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727664  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:14:48.727676  245459 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 166.582µs
	I1229 07:14:48.727655  245459 cache.go:107] acquiring lock: {Name:mk6876db4017aa5ef89eab36b68c600dec62345c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727662  245459 cache.go:107] acquiring lock: {Name:mkeb7d05fa98b741eb24c41313df007ce9bbb93e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.727685  245459 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:14:48.727668  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:14:48.727775  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:14:48.727797  245459 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 210.438µs
	I1229 07:14:48.727820  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:14:48.727769  245459 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 234.428µs
	I1229 07:14:48.727829  245459 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:14:48.727829  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:14:48.727830  245459 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 232.169µs
	I1229 07:14:48.727842  245459 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:14:48.727822  245459 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:14:48.727840  245459 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 233.187µs
	I1229 07:14:48.727851  245459 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:14:48.727684  245459 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:14:48.727868  245459 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 294.495µs
	I1229 07:14:48.727874  245459 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:14:48.727880  245459 cache.go:87] Successfully saved all images to host disk.
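Each cache.go line above follows the same pattern: check whether the image's tarball already exists under .minikube/cache/images and, if it does, skip the download and report the save as succeeded. A rough Go sketch of that existence check, assuming a simplified tag-to-filename mapping (not necessarily minikube's exact cache layout):

// cache_check_sketch.go — illustrative only: skip downloading an image when its cached
// tarball already exists, the pattern behind the "exists ... succeeded" lines above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps "registry.k8s.io/etcd:3.6.6-0" to ".../registry.k8s.io/etcd_3.6.6-0".
// This mapping is an assumption for the sketch.
func cachedImagePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
	for _, img := range []string{
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/pause:3.10.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		if _, err := os.Stat(cachedImagePath(cacheDir, img)); err == nil {
			fmt.Println("cache hit, skipping download:", img)
		} else {
			fmt.Println("cache miss, would download:", img)
		}
	}
}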
	I1229 07:14:48.749020  245459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:14:48.749037  245459 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:14:48.749053  245459 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:14:48.749090  245459 start.go:360] acquireMachinesLock for no-preload-122332: {Name:mka83f33e779c9aed23f5a0e4fef1298c9058532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:14:48.749192  245459 start.go:364] duration metric: took 78.893µs to acquireMachinesLock for "no-preload-122332"
	I1229 07:14:48.749233  245459 start.go:93] Provisioning new machine with config: &{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:14:48.749320  245459 start.go:125] createHost starting for "" (driver="docker")
	W1229 07:14:47.932666  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:14:49.939330  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:14:48.751837  245459 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:14:48.752067  245459 start.go:159] libmachine.API.Create for "no-preload-122332" (driver="docker")
	I1229 07:14:48.752096  245459 client.go:173] LocalClient.Create starting
	I1229 07:14:48.752182  245459 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:14:48.752240  245459 main.go:144] libmachine: Decoding PEM data...
	I1229 07:14:48.752266  245459 main.go:144] libmachine: Parsing certificate...
	I1229 07:14:48.752322  245459 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:14:48.752344  245459 main.go:144] libmachine: Decoding PEM data...
	I1229 07:14:48.752353  245459 main.go:144] libmachine: Parsing certificate...
	I1229 07:14:48.752691  245459 cli_runner.go:164] Run: docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:14:48.769711  245459 cli_runner.go:211] docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:14:48.769793  245459 network_create.go:284] running [docker network inspect no-preload-122332] to gather additional debugging logs...
	I1229 07:14:48.769809  245459 cli_runner.go:164] Run: docker network inspect no-preload-122332
	W1229 07:14:48.786633  245459 cli_runner.go:211] docker network inspect no-preload-122332 returned with exit code 1
	I1229 07:14:48.786669  245459 network_create.go:287] error running [docker network inspect no-preload-122332]: docker network inspect no-preload-122332: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-122332 not found
	I1229 07:14:48.786682  245459 network_create.go:289] output of [docker network inspect no-preload-122332]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-122332 not found
	
	** /stderr **
	I1229 07:14:48.786807  245459 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:14:48.803944  245459 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:14:48.805484  245459 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:14:48.806773  245459 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:14:48.807284  245459 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66e171323e2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:d9:01:28:19:dc} reservation:<nil>}
	I1229 07:14:48.807714  245459 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-faaa954500ab IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8a:1a:a6:08:26} reservation:<nil>}
	I1229 07:14:48.808357  245459 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c17fe0}
	I1229 07:14:48.808385  245459 network_create.go:124] attempt to create docker network no-preload-122332 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1229 07:14:48.808427  245459 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-122332 no-preload-122332
	I1229 07:14:48.860285  245459 network_create.go:108] docker network no-preload-122332 192.168.94.0/24 created
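The network.go lines above walk candidate 192.168.x.0/24 subnets (49, 58, 67, 76, 85, ...), skip the ones already backing a docker bridge, and take the first free one, here 192.168.94.0/24. A self-contained Go sketch of that selection loop, assuming the fixed step of 9 suggested by the log rather than minikube's actual algorithm:

// free_subnet_sketch.go — illustrative only: pick the first free 192.168.x.0/24 subnet,
// stepping x by 9 (49, 58, 67, 76, 85, 94, ...) as the skipped subnets above suggest.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// Subnets the log reports as already taken by existing bridges on this host.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true,
	}
	if cidr, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.94.0/24
	}
}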
	I1229 07:14:48.860328  245459 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-122332" container
	I1229 07:14:48.860412  245459 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:14:48.885552  245459 cli_runner.go:164] Run: docker volume create no-preload-122332 --label name.minikube.sigs.k8s.io=no-preload-122332 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:14:48.904065  245459 oci.go:103] Successfully created a docker volume no-preload-122332
	I1229 07:14:48.904155  245459 cli_runner.go:164] Run: docker run --rm --name no-preload-122332-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-122332 --entrypoint /usr/bin/test -v no-preload-122332:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:14:49.410107  245459 oci.go:107] Successfully prepared a docker volume no-preload-122332
	I1229 07:14:49.410159  245459 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1229 07:14:49.410270  245459 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:14:49.410317  245459 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:14:49.410468  245459 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:14:49.490297  245459 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-122332 --name no-preload-122332 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-122332 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-122332 --network no-preload-122332 --ip 192.168.94.2 --volume no-preload-122332:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:14:49.866241  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Running}}
	I1229 07:14:49.891171  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:14:49.917198  245459 cli_runner.go:164] Run: docker exec no-preload-122332 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:14:49.983409  245459 oci.go:144] the created container "no-preload-122332" has a running status.
	I1229 07:14:49.983438  245459 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa...
	I1229 07:14:50.197679  245459 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:14:50.237885  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:14:50.265423  245459 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:14:50.265445  245459 kic_runner.go:114] Args: [docker exec --privileged no-preload-122332 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:14:50.339055  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:14:50.362853  245459 machine.go:94] provisionDockerMachine start ...
	I1229 07:14:50.363174  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:50.382988  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:50.383338  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:50.383357  245459 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:14:50.545461  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:14:50.545490  245459 ubuntu.go:182] provisioning hostname "no-preload-122332"
	I1229 07:14:50.545557  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:50.569110  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:50.569480  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:50.569502  245459 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-122332 && echo "no-preload-122332" | sudo tee /etc/hostname
	I1229 07:14:50.737395  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:14:50.737485  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:50.763043  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:50.763394  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:50.763437  245459 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-122332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-122332/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-122332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:14:50.914627  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:14:50.914660  245459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:14:50.914684  245459 ubuntu.go:190] setting up certificates
	I1229 07:14:50.914706  245459 provision.go:84] configureAuth start
	I1229 07:14:50.914777  245459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:14:50.938733  245459 provision.go:143] copyHostCerts
	I1229 07:14:50.938802  245459 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:14:50.938814  245459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:14:50.938892  245459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:14:50.938996  245459 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:14:50.939010  245459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:14:50.939051  245459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:14:50.939144  245459 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:14:50.939158  245459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:14:50.939199  245459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:14:50.939325  245459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.no-preload-122332 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-122332]
	I1229 07:14:51.007801  245459 provision.go:177] copyRemoteCerts
	I1229 07:14:51.007864  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:14:51.007913  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.029044  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.134180  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:14:51.155889  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:14:51.174639  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:14:51.193206  245459 provision.go:87] duration metric: took 278.459636ms to configureAuth
	I1229 07:14:51.193246  245459 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:14:51.193433  245459 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:14:51.193542  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.215072  245459 main.go:144] libmachine: Using SSH client type: native
	I1229 07:14:51.215353  245459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1229 07:14:51.215374  245459 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:14:51.502390  245459 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:14:51.502420  245459 machine.go:97] duration metric: took 1.139540323s to provisionDockerMachine
	I1229 07:14:51.502434  245459 client.go:176] duration metric: took 2.750327082s to LocalClient.Create
	I1229 07:14:51.502461  245459 start.go:167] duration metric: took 2.750392298s to libmachine.API.Create "no-preload-122332"
	I1229 07:14:51.502476  245459 start.go:293] postStartSetup for "no-preload-122332" (driver="docker")
	I1229 07:14:51.502490  245459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:14:51.502570  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:14:51.502623  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.521000  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.624776  245459 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:14:51.629374  245459 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:14:51.629402  245459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:14:51.629415  245459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:14:51.629466  245459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:14:51.629575  245459 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:14:51.629697  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:14:51.639088  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:14:51.662317  245459 start.go:296] duration metric: took 159.824652ms for postStartSetup
	I1229 07:14:51.662717  245459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:14:51.685599  245459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:14:51.685875  245459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:14:51.685923  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.708201  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.811411  245459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:14:51.817079  245459 start.go:128] duration metric: took 3.06774455s to createHost
	I1229 07:14:51.817106  245459 start.go:83] releasing machines lock for "no-preload-122332", held for 3.067898739s
	I1229 07:14:51.817187  245459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:14:51.840236  245459 ssh_runner.go:195] Run: cat /version.json
	I1229 07:14:51.840295  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.840297  245459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:14:51.840370  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:14:51.863761  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:51.864308  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:14:52.042079  245459 ssh_runner.go:195] Run: systemctl --version
	I1229 07:14:52.050752  245459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:14:52.093375  245459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:14:52.099342  245459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:14:52.099414  245459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:14:52.130588  245459 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:14:52.130617  245459 start.go:496] detecting cgroup driver to use...
	I1229 07:14:52.130656  245459 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:14:52.130702  245459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:14:52.150541  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:14:52.166197  245459 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:14:52.166302  245459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:14:52.187330  245459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:14:52.212528  245459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:14:52.328916  245459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:14:52.447171  245459 docker.go:234] disabling docker service ...
	I1229 07:14:52.447281  245459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:14:52.471655  245459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:14:52.488320  245459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:14:52.602578  245459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:14:52.716362  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:14:52.732482  245459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:14:52.749518  245459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:14:52.749580  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.763000  245459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:14:52.763070  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.773911  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.785178  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.797174  245459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:14:52.807995  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.819485  245459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.837882  245459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:14:52.848729  245459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:14:52.858423  245459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:14:52.867838  245459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:14:52.970897  245459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:14:53.890892  245459 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:14:53.890969  245459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:14:53.896105  245459 start.go:574] Will wait 60s for crictl version
	I1229 07:14:53.896167  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:53.901269  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:14:53.933757  245459 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:14:53.933834  245459 ssh_runner.go:195] Run: crio --version
	I1229 07:14:53.970748  245459 ssh_runner.go:195] Run: crio --version
	I1229 07:14:54.008255  245459 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
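After CRI-O is reconfigured and restarted, start.go waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for its version. A minimal Go sketch of that wait-for-socket step (a simple stat poll under the same 60s budget; minikube's own retry helper may behave differently):

// wait_socket_sketch.go — illustrative only: poll for the CRI socket with a deadline,
// the idea behind "Will wait 60s for socket path /var/run/crio/crio.sock".
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}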
	I1229 07:14:49.542623  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:14:49.543067  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:14:49.543123  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:14:49.543195  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:14:49.582785  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:14:49.582812  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:14:49.582819  225445 cri.go:96] found id: ""
	I1229 07:14:49.582828  225445 logs.go:282] 2 containers: [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:14:49.582883  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.588955  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.594900  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:14:49.595020  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:14:49.632157  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:14:49.632181  225445 cri.go:96] found id: ""
	I1229 07:14:49.632191  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:14:49.632866  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.638456  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:14:49.638519  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:14:49.679909  225445 cri.go:96] found id: ""
	I1229 07:14:49.679939  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.679951  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:14:49.679958  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:14:49.680009  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:14:49.719747  225445 cri.go:96] found id: "bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298"
	I1229 07:14:49.719772  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:14:49.719778  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:14:49.719783  225445 cri.go:96] found id: ""
	I1229 07:14:49.719792  225445 logs.go:282] 3 containers: [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:14:49.719886  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.725641  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.731065  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.737676  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:14:49.737759  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:14:49.777182  225445 cri.go:96] found id: ""
	I1229 07:14:49.777214  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.777252  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:14:49.777261  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:14:49.777334  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:14:49.813525  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:14:49.813550  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:14:49.813557  225445 cri.go:96] found id: ""
	I1229 07:14:49.813567  225445 logs.go:282] 2 containers: [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:14:49.813630  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.818968  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:14:49.822985  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:14:49.823060  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:14:49.857262  225445 cri.go:96] found id: ""
	I1229 07:14:49.857292  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.857304  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:14:49.857312  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:14:49.857371  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:14:49.894388  225445 cri.go:96] found id: ""
	I1229 07:14:49.894417  225445 logs.go:282] 0 containers: []
	W1229 07:14:49.894428  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:14:49.894441  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:14:49.894458  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:14:49.938527  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:14:49.938583  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:14:49.974192  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:14:49.974497  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:14:50.016194  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:14:50.016237  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:14:50.140324  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:14:50.140411  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:14:50.210120  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:14:50.210321  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:14:50.210339  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:14:50.261264  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:14:50.261300  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:14:50.301092  225445 logs.go:123] Gathering logs for kube-scheduler [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298] ...
	I1229 07:14:50.301120  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298"
	W1229 07:14:50.330316  225445 logs.go:138] Found kube-scheduler [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298] problem: E1229 07:14:06.198316       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:14:50.330340  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:14:50.330354  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:14:50.408422  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:14:50.408458  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:14:50.496177  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:14:50.496212  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:14:50.535512  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:14:50.535543  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:14:50.551208  225445 logs.go:123] Gathering logs for kube-apiserver [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e] ...
	I1229 07:14:50.551251  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:14:50.593904  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:14:50.593930  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:14:50.593986  225445 out.go:285] X Problems detected in kube-scheduler [bc46a059c0b20ae0cdb359909a2896f904772ffa6178a77a2cc0269f181bd298]:
	W1229 07:14:50.594002  225445 out.go:285]   E1229 07:14:06.198316       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:14:50.594010  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:14:50.594016  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
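The 225445 log segment above begins with an apiserver healthz probe that fails with "connection refused" and then falls back to gathering per-container logs. A short Go sketch of the probe itself, assuming a self-signed cluster certificate (hence InsecureSkipVerify, used here for illustration only):

// healthz_probe_sketch.go — illustrative only: the apiserver healthz probe pattern above,
// where "connection refused" is treated as the apiserver being stopped.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The cluster serves a self-signed cert; skipping verification is for the sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. dial tcp ...: connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}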
	W1229 07:14:52.434165  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:14:54.436704  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:14:54.012919  245459 cli_runner.go:164] Run: docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:14:54.035947  245459 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:14:54.041248  245459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:14:54.055334  245459 kubeadm.go:884] updating cluster {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:14:54.055450  245459 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:14:54.055483  245459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:14:54.082944  245459 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1229 07:14:54.082966  245459 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1229 07:14:54.083029  245459 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.083059  245459 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.083073  245459 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.083091  245459 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.083105  245459 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.083053  245459 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.083027  245459 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:54.083071  245459 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1229 07:14:54.084395  245459 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.084407  245459 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.084395  245459 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1229 07:14:54.084397  245459 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.084397  245459 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:54.084455  245459 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.084458  245459 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.084397  245459 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.221549  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.229003  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.235095  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.236198  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.239326  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.255782  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1229 07:14:54.267443  245459 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508" in container runtime
	I1229 07:14:54.267497  245459 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.267544  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.273949  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.280402  245459 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1229 07:14:54.280444  245459 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.280487  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332094  245459 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc" in container runtime
	I1229 07:14:54.332135  245459 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.332137  245459 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499" in container runtime
	I1229 07:14:54.332150  245459 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1229 07:14:54.332168  245459 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.332187  245459 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.332193  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332205  245459 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1229 07:14:54.332214  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332237  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332246  245459 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1229 07:14:54.332280  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.332288  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.332308  245459 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8" in container runtime
	I1229 07:14:54.332334  245459 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.332341  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.332365  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:54.338127  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.338153  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:14:54.338671  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.338747  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.369754  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.372134  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.372134  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.376767  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:14:54.376879  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.376954  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.377031  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.408860  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.411446  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:14:54.411466  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:14:54.416828  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:14:54.418351  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:14:54.418417  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:14:54.418458  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:14:54.450693  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:14:54.457783  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1229 07:14:54.457839  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1229 07:14:54.457867  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1229 07:14:54.457887  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:14:54.457918  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:14:54.457927  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1229 07:14:54.465328  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1229 07:14:54.465432  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:14:54.465559  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0
	I1229 07:14:54.465565  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0
	I1229 07:14:54.465650  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:14:54.465671  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:14:54.484007  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0
	I1229 07:14:54.484055  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1229 07:14:54.484104  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (23144960 bytes)
	I1229 07:14:54.484133  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1229 07:14:54.484160  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1229 07:14:54.484110  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:14:54.484170  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1229 07:14:54.484213  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1229 07:14:54.484234  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1229 07:14:54.484285  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1229 07:14:54.484239  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1229 07:14:54.484327  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1229 07:14:54.484341  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (27696640 bytes)
	I1229 07:14:54.484311  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (17248256 bytes)
	I1229 07:14:54.493421  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1229 07:14:54.493449  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (25791488 bytes)
	I1229 07:14:54.581788  245459 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1229 07:14:54.581864  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1229 07:14:55.067346  245459 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:55.323126  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1229 07:14:55.323159  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:14:55.323168  245459 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1229 07:14:55.323196  245459 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:55.323215  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:14:55.323246  245459 ssh_runner.go:195] Run: which crictl
	I1229 07:14:56.709236  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.385969585s)
	I1229 07:14:56.709266  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1229 07:14:56.709273  245459 ssh_runner.go:235] Completed: which crictl: (1.386006105s)
	I1229 07:14:56.709288  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:14:56.709328  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:56.709329  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:14:56.736290  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:14:57.527383  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1229 07:14:57.527424  245459 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:14:57.527480  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:14:57.527485  245459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1229 07:14:56.932390  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:14:58.932686  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:00.934504  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:14:58.743101  245459 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.215584622s)
	I1229 07:14:58.743169  245459 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1229 07:14:58.743167  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.215661248s)
	I1229 07:14:58.743191  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1229 07:14:58.743234  245459 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:14:58.743275  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:14:58.743280  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:14:59.913084  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.16978422s)
	I1229 07:14:59.913110  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1229 07:14:59.913112  245459 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.169814917s)
	I1229 07:14:59.913133  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:14:59.913157  245459 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1229 07:14:59.913168  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:14:59.913184  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1229 07:15:01.299073  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.385876515s)
	I1229 07:15:01.299106  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1229 07:15:01.299125  245459 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:15:01.299171  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:15:02.364232  245459 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.065025379s)
	I1229 07:15:02.364259  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1229 07:15:02.364288  245459 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:15:02.364342  245459 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:15:02.901195  245459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1229 07:15:02.901252  245459 cache_images.go:125] Successfully loaded all cached images
	I1229 07:15:02.901266  245459 cache_images.go:94] duration metric: took 8.818282444s to LoadCachedImages
	I1229 07:15:02.901278  245459 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:15:02.901360  245459 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-122332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:15:02.901423  245459 ssh_runner.go:195] Run: crio config
	I1229 07:15:02.946475  245459 cni.go:84] Creating CNI manager for ""
	I1229 07:15:02.946504  245459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:15:02.946527  245459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:15:02.946551  245459 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-122332 NodeName:no-preload-122332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:15:02.946715  245459 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-122332"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:15:02.946773  245459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:15:02.955426  245459 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1229 07:15:02.955485  245459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1229 07:15:02.963767  245459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1229 07:15:02.963815  245459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1229 07:15:02.963815  245459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1229 07:15:02.963860  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1229 07:15:02.963900  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1229 07:15:02.963914  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:02.968099  245459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1229 07:15:02.968137  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1229 07:15:02.969449  245459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1229 07:15:02.969475  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1229 07:15:02.985772  245459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1229 07:15:03.031174  245459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1229 07:15:03.031210  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
	I1229 07:15:03.485571  245459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:15:03.494174  245459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:15:03.506958  245459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:15:00.599298  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:15:00.599755  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:15:00.599816  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:15:00.599864  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:15:00.630594  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:00.630618  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:00.630625  225445 cri.go:96] found id: ""
	I1229 07:15:00.630634  225445 logs.go:282] 2 containers: [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:15:00.630698  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.635205  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.639105  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:15:00.639168  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:15:00.668518  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:00.668542  225445 cri.go:96] found id: ""
	I1229 07:15:00.668551  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:15:00.668624  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.672911  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:15:00.672977  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:15:00.703605  225445 cri.go:96] found id: ""
	I1229 07:15:00.703647  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.703655  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:15:00.703660  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:15:00.703704  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:15:00.730689  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:15:00.730709  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:00.730712  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:00.730715  225445 cri.go:96] found id: ""
	I1229 07:15:00.730724  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:15:00.730801  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.735191  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.739491  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.743740  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:15:00.743796  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:15:00.774983  225445 cri.go:96] found id: ""
	I1229 07:15:00.775012  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.775023  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:15:00.775031  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:15:00.775083  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:15:00.804277  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:00.804302  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:00.804308  225445 cri.go:96] found id: ""
	I1229 07:15:00.804317  225445 logs.go:282] 2 containers: [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:15:00.804370  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.808514  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:00.812116  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:15:00.812191  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:15:00.841069  225445 cri.go:96] found id: ""
	I1229 07:15:00.841090  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.841098  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:15:00.841103  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:15:00.841164  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:15:00.870655  225445 cri.go:96] found id: ""
	I1229 07:15:00.870680  225445 logs.go:282] 0 containers: []
	W1229 07:15:00.870690  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:15:00.870700  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:15:00.870714  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:15:00.883900  225445 logs.go:123] Gathering logs for kube-apiserver [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e] ...
	I1229 07:15:00.883926  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:00.919064  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:15:00.919098  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:00.955095  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:15:00.955123  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:00.990776  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:15:00.990801  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:01.026346  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:15:01.026378  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:01.054756  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:15:01.054785  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:01.084122  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:15:01.084152  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:15:01.156209  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:15:01.156248  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:15:01.156265  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:15:01.188644  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:01.188691  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:15:01.188709  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:01.266991  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:15:01.267026  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:15:01.335632  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:15:01.335672  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:15:01.369818  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:15:01.369853  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:15:01.462080  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:01.462107  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:15:01.462167  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:15:01.462182  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:01.462188  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:01.462195  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:15:03.433485  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:05.433731  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:15:03.630717  245459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1229 07:15:03.644828  245459 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:15:03.648791  245459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:15:03.659287  245459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:15:03.741781  245459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:15:03.768679  245459 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332 for IP: 192.168.94.2
	I1229 07:15:03.768704  245459 certs.go:195] generating shared ca certs ...
	I1229 07:15:03.768723  245459 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.768858  245459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:15:03.768905  245459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:15:03.768920  245459 certs.go:257] generating profile certs ...
	I1229 07:15:03.768981  245459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key
	I1229 07:15:03.769002  245459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt with IP's: []
	I1229 07:15:03.837813  245459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt ...
	I1229 07:15:03.837840  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: {Name:mkd414613fec1a2dd800ddc9ca6bc6a4705cfab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.837999  245459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key ...
	I1229 07:15:03.838010  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key: {Name:mk6f029bf999c498ef9ce3ce68c7c0381a32c859 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.838087  245459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595
	I1229 07:15:03.838103  245459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1229 07:15:03.868115  245459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595 ...
	I1229 07:15:03.868139  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595: {Name:mk0aa58ec87a70bc621ab6f481dd4ea712bbcdbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.868287  245459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595 ...
	I1229 07:15:03.868300  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595: {Name:mkf61453ffeb635bf79fcbe951ac5845a6244320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.868369  245459 certs.go:382] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt.8c20c595 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt
	I1229 07:15:03.868440  245459 certs.go:386] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595 -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key
	I1229 07:15:03.868491  245459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key
	I1229 07:15:03.868505  245459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt with IP's: []
	I1229 07:15:03.964388  245459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt ...
	I1229 07:15:03.964415  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt: {Name:mk23a95ba3379465e793c7c74c3ec80fed5ae7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.964557  245459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key ...
	I1229 07:15:03.964570  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key: {Name:mk352b6374d05a023835df7477e511e85a67fab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:03.964749  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:15:03.964786  245459 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:15:03.964798  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:15:03.964823  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:15:03.964848  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:15:03.964873  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:15:03.964948  245459 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:15:03.965509  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:15:03.983731  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:15:04.001766  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:15:04.019067  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:15:04.036086  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:15:04.052492  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:15:04.069080  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:15:04.086290  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:15:04.103177  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:15:04.123209  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:15:04.140734  245459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:15:04.157885  245459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:15:04.170000  245459 ssh_runner.go:195] Run: openssl version
	I1229 07:15:04.176125  245459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.183418  245459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:15:04.190458  245459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.194486  245459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.194536  245459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:15:04.229647  245459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:15:04.237298  245459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12733.pem /etc/ssl/certs/51391683.0
	I1229 07:15:04.245013  245459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.252185  245459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:15:04.259492  245459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.262992  245459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.263033  245459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:15:04.296758  245459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:15:04.304343  245459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127332.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:15:04.311720  245459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.319013  245459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:15:04.326130  245459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.329968  245459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.330024  245459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:15:04.365777  245459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:15:04.373840  245459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:15:04.381407  245459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:15:04.385104  245459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:15:04.385167  245459 kubeadm.go:401] StartCluster: {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:15:04.385248  245459 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:15:04.385292  245459 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:15:04.412702  245459 cri.go:96] found id: ""
	I1229 07:15:04.412772  245459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:15:04.420913  245459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:15:04.429486  245459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:15:04.429543  245459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:15:04.438362  245459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:15:04.438379  245459 kubeadm.go:158] found existing configuration files:
	
	I1229 07:15:04.438413  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:15:04.445947  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:15:04.446001  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:15:04.453407  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:15:04.460635  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:15:04.460679  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:15:04.467824  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:15:04.475197  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:15:04.475260  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:15:04.482620  245459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:15:04.490555  245459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:15:04.490607  245459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:15:04.498848  245459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:15:04.594562  245459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:15:04.651558  245459 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1229 07:15:07.433883  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:09.933334  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:15:12.498044  245459 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:15:12.498107  245459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:15:12.498206  245459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:15:12.498322  245459 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1229 07:15:12.498381  245459 kubeadm.go:319] OS: Linux
	I1229 07:15:12.498454  245459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:15:12.498521  245459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:15:12.498573  245459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:15:12.498631  245459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:15:12.498676  245459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:15:12.498717  245459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:15:12.498764  245459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:15:12.498804  245459 kubeadm.go:319] CGROUPS_IO: enabled
	I1229 07:15:12.498870  245459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:15:12.498961  245459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:15:12.499070  245459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:15:12.499136  245459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:15:12.500731  245459 out.go:252]   - Generating certificates and keys ...
	I1229 07:15:12.500835  245459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:15:12.500908  245459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:15:12.500982  245459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:15:12.501033  245459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:15:12.501101  245459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:15:12.501169  245459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:15:12.501300  245459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:15:12.501461  245459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-122332] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1229 07:15:12.501539  245459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:15:12.501674  245459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-122332] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1229 07:15:12.501767  245459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:15:12.501831  245459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:15:12.501876  245459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:15:12.501927  245459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:15:12.501977  245459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:15:12.502026  245459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:15:12.502072  245459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:15:12.502138  245459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:15:12.502194  245459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:15:12.502298  245459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:15:12.502372  245459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:15:12.504411  245459 out.go:252]   - Booting up control plane ...
	I1229 07:15:12.504496  245459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:15:12.504563  245459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:15:12.504620  245459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:15:12.504711  245459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:15:12.504794  245459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:15:12.504895  245459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:15:12.504972  245459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:15:12.505011  245459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:15:12.505125  245459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:15:12.505286  245459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:15:12.505364  245459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.796195ms
	I1229 07:15:12.505494  245459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:15:12.505565  245459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1229 07:15:12.505651  245459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:15:12.505745  245459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:15:12.505815  245459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 510.728293ms
	I1229 07:15:12.505872  245459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.184700007s
	I1229 07:15:12.505935  245459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00121426s
	I1229 07:15:12.506096  245459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:15:12.506265  245459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:15:12.506360  245459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:15:12.506575  245459 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-122332 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:15:12.506633  245459 kubeadm.go:319] [bootstrap-token] Using token: n59rak.5imj7ctdwsn26hut
	I1229 07:15:12.507814  245459 out.go:252]   - Configuring RBAC rules ...
	I1229 07:15:12.507947  245459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:15:12.508029  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:15:12.508177  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:15:12.508318  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:15:12.508432  245459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:15:12.508517  245459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:15:12.508621  245459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:15:12.508695  245459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:15:12.508765  245459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:15:12.508775  245459 kubeadm.go:319] 
	I1229 07:15:12.508864  245459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:15:12.508871  245459 kubeadm.go:319] 
	I1229 07:15:12.508989  245459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:15:12.509004  245459 kubeadm.go:319] 
	I1229 07:15:12.509030  245459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:15:12.509089  245459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:15:12.509136  245459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:15:12.509145  245459 kubeadm.go:319] 
	I1229 07:15:12.509200  245459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:15:12.509206  245459 kubeadm.go:319] 
	I1229 07:15:12.509273  245459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:15:12.509285  245459 kubeadm.go:319] 
	I1229 07:15:12.509336  245459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:15:12.509407  245459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:15:12.509468  245459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:15:12.509473  245459 kubeadm.go:319] 
	I1229 07:15:12.509563  245459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:15:12.509658  245459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:15:12.509666  245459 kubeadm.go:319] 
	I1229 07:15:12.509736  245459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n59rak.5imj7ctdwsn26hut \
	I1229 07:15:12.509829  245459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:15:12.509852  245459 kubeadm.go:319] 	--control-plane 
	I1229 07:15:12.509857  245459 kubeadm.go:319] 
	I1229 07:15:12.509927  245459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:15:12.509935  245459 kubeadm.go:319] 
	I1229 07:15:12.510048  245459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n59rak.5imj7ctdwsn26hut \
	I1229 07:15:12.510159  245459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
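A quick way to sanity-check the --discovery-token-ca-cert-hash printed above is the standard openssl pipeline from the kubeadm documentation, pointed at the cluster CA under the certificateDir reported earlier in this run ("/var/lib/minikube/certs"); the ca.crt filename inside that directory is assumed from kubeadm's usual layout, not shown verbatim in the log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The printed digest should match the sha256:... value embedded in the join commands above.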
	I1229 07:15:12.510170  245459 cni.go:84] Creating CNI manager for ""
	I1229 07:15:12.510176  245459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:15:12.511411  245459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:15:12.512524  245459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:15:12.517100  245459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:15:12.517122  245459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:15:12.530368  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:15:12.729377  245459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:15:12.729493  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:12.729564  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-122332 minikube.k8s.io/updated_at=2025_12_29T07_15_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=no-preload-122332 minikube.k8s.io/primary=true
	I1229 07:15:12.806649  245459 ops.go:34] apiserver oom_adj: -16
	I1229 07:15:12.806736  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:13.306968  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:11.464342  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:15:11.464751  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:15:11.464806  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:15:11.464856  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:15:11.492541  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:11.492567  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:11.492576  225445 cri.go:96] found id: ""
	I1229 07:15:11.492585  225445 logs.go:282] 2 containers: [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:15:11.492648  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.497133  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.500792  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:15:11.500862  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:15:11.529062  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:11.529088  225445 cri.go:96] found id: ""
	I1229 07:15:11.529098  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:15:11.529155  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.534990  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:15:11.535056  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:15:11.566743  225445 cri.go:96] found id: ""
	I1229 07:15:11.566769  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.566780  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:15:11.566787  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:15:11.566855  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:15:11.596946  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:15:11.596971  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:11.596978  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:11.596982  225445 cri.go:96] found id: ""
	I1229 07:15:11.596991  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:15:11.597047  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.601285  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.605011  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.608892  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:15:11.608954  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:15:11.636667  225445 cri.go:96] found id: ""
	I1229 07:15:11.636688  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.636696  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:15:11.636701  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:15:11.636747  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:15:11.677966  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:11.677992  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:11.677998  225445 cri.go:96] found id: ""
	I1229 07:15:11.678007  225445 logs.go:282] 2 containers: [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:15:11.678065  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.682709  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:11.686715  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:15:11.686777  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:15:11.715970  225445 cri.go:96] found id: ""
	I1229 07:15:11.715998  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.716008  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:15:11.716016  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:15:11.716074  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:15:11.755350  225445 cri.go:96] found id: ""
	I1229 07:15:11.755378  225445 logs.go:282] 0 containers: []
	W1229 07:15:11.755391  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:15:11.755403  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:15:11.755417  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:15:11.822637  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:15:11.822672  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:15:11.857279  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:15:11.857308  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:15:11.962906  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:15:11.962941  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:15:12.022279  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:15:12.022308  225445 logs.go:123] Gathering logs for kube-apiserver [864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e] ...
	I1229 07:15:12.022320  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:12.058675  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:15:12.058718  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:15:12.086092  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:12.086119  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:15:12.086132  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:12.148550  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:15:12.148585  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:15:12.162760  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:15:12.162787  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:12.194638  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:15:12.194673  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:12.229590  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:15:12.229618  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:12.258090  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:15:12.258118  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:12.285030  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:15:12.285053  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:12.313633  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:12.313656  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:15:12.313718  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:15:12.313732  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:12.313736  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:12.313741  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
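The kube-scheduler problem flagged just above ("failed to listen on 127.0.0.1:10259 ... bind: address already in use") usually means an earlier scheduler instance still holds the port. A minimal sketch for confirming that on the node, using only tools already invoked in this log (the ss filter is standard iproute2 syntax; crictl is used the same way as in the listings above):

	sudo ss -ltnp 'sport = :10259'
	sudo crictl ps -a --name=kube-scheduler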
	W1229 07:15:11.933775  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	W1229 07:15:14.433392  241214 pod_ready.go:104] pod "coredns-5dd5756b68-pnstl" is not "Ready", error: <nil>
	I1229 07:15:15.932655  241214 pod_ready.go:94] pod "coredns-5dd5756b68-pnstl" is "Ready"
	I1229 07:15:15.932682  241214 pod_ready.go:86] duration metric: took 39.005533371s for pod "coredns-5dd5756b68-pnstl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.935615  241214 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.939504  241214 pod_ready.go:94] pod "etcd-old-k8s-version-876718" is "Ready"
	I1229 07:15:15.939521  241214 pod_ready.go:86] duration metric: took 3.884552ms for pod "etcd-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.943368  241214 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.948475  241214 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-876718" is "Ready"
	I1229 07:15:15.948497  241214 pod_ready.go:86] duration metric: took 5.11088ms for pod "kube-apiserver-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:15.951294  241214 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:16.131126  241214 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-876718" is "Ready"
	I1229 07:15:16.131156  241214 pod_ready.go:86] duration metric: took 179.84269ms for pod "kube-controller-manager-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:16.330850  241214 pod_ready.go:83] waiting for pod "kube-proxy-2v9kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:13.807434  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:14.307053  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:14.807023  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:15.307113  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:15.806957  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:16.307413  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:16.807208  245459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:16.875084  245459 kubeadm.go:1114] duration metric: took 4.145669474s to wait for elevateKubeSystemPrivileges
	I1229 07:15:16.875133  245459 kubeadm.go:403] duration metric: took 12.489967583s to StartCluster
	I1229 07:15:16.875155  245459 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:16.875240  245459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:15:16.876897  245459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:15:16.877094  245459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:15:16.877106  245459 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:15:16.877167  245459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:15:16.877281  245459 addons.go:70] Setting storage-provisioner=true in profile "no-preload-122332"
	I1229 07:15:16.877302  245459 addons.go:239] Setting addon storage-provisioner=true in "no-preload-122332"
	I1229 07:15:16.877304  245459 addons.go:70] Setting default-storageclass=true in profile "no-preload-122332"
	I1229 07:15:16.877326  245459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-122332"
	I1229 07:15:16.877326  245459 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:15:16.877339  245459 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:15:16.877705  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:15:16.877850  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:15:16.879393  245459 out.go:179] * Verifying Kubernetes components...
	I1229 07:15:16.880601  245459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:15:16.902200  245459 addons.go:239] Setting addon default-storageclass=true in "no-preload-122332"
	I1229 07:15:16.902261  245459 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:15:16.902672  245459 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:15:16.903154  245459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:15:16.904611  245459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:15:16.904635  245459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:15:16.904697  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:15:16.931251  245459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:15:16.931278  245459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:15:16.931348  245459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:15:16.937725  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:15:16.956103  245459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:15:16.969554  245459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:15:17.017385  245459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:15:17.052632  245459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:15:17.067932  245459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:15:17.125695  245459 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1229 07:15:17.127886  245459 node_ready.go:35] waiting up to 6m0s for node "no-preload-122332" to be "Ready" ...
	I1229 07:15:17.365855  245459 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:15:16.730892  241214 pod_ready.go:94] pod "kube-proxy-2v9kr" is "Ready"
	I1229 07:15:16.730923  241214 pod_ready.go:86] duration metric: took 400.042744ms for pod "kube-proxy-2v9kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:16.931966  241214 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:17.331306  241214 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-876718" is "Ready"
	I1229 07:15:17.331331  241214 pod_ready.go:86] duration metric: took 399.339839ms for pod "kube-scheduler-old-k8s-version-876718" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:17.331342  241214 pod_ready.go:40] duration metric: took 40.408734279s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:15:17.385823  241214 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1229 07:15:17.387294  241214 out.go:203] 
	W1229 07:15:17.388729  241214 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1229 07:15:17.389634  241214 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:15:17.393320  241214 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-876718" cluster and "default" namespace by default
	I1229 07:15:17.367294  245459 addons.go:530] duration metric: took 490.128613ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:15:17.630299  245459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-122332" context rescaled to 1 replicas
	W1229 07:15:19.131291  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	W1229 07:15:21.131616  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	I1229 07:15:22.314644  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1229 07:15:23.631413  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	W1229 07:15:25.631599  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	W1229 07:15:28.131467  245459 node_ready.go:57] node "no-preload-122332" has "Ready":"False" status (will retry)
	I1229 07:15:27.315664  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1229 07:15:27.315722  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:15:27.315775  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:15:27.343539  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:15:27.343559  225445 cri.go:96] found id: "864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e"
	I1229 07:15:27.343564  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:15:27.343569  225445 cri.go:96] found id: ""
	I1229 07:15:27.343577  225445 logs.go:282] 3 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 864abb65f432c26fa136421795c1a96c0e9342e9a1e658790ff31d6cfc64ee5e 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:15:27.343639  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.347759  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.351644  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.355264  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:15:27.355319  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:15:27.381569  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:27.381592  225445 cri.go:96] found id: ""
	I1229 07:15:27.381601  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:15:27.381643  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.385479  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:15:27.385538  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:15:27.412489  225445 cri.go:96] found id: ""
	I1229 07:15:27.412509  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.412522  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:15:27.412538  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:15:27.412597  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:15:27.439607  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:15:27.439626  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:15:27.439629  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:15:27.439633  225445 cri.go:96] found id: ""
	I1229 07:15:27.439640  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:15:27.439692  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.443554  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.447225  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.450775  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:15:27.450822  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:15:27.477556  225445 cri.go:96] found id: ""
	I1229 07:15:27.477578  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.477588  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:15:27.477594  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:15:27.477647  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:15:27.503963  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:15:27.503985  225445 cri.go:96] found id: "e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:27.503989  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:15:27.503993  225445 cri.go:96] found id: ""
	I1229 07:15:27.504000  225445 logs.go:282] 3 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:15:27.504053  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.507958  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.511470  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:15:27.514939  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:15:27.514985  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:15:27.541451  225445 cri.go:96] found id: ""
	I1229 07:15:27.541470  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.541478  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:15:27.541483  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:15:27.541521  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:15:27.568143  225445 cri.go:96] found id: ""
	I1229 07:15:27.568170  225445 logs.go:282] 0 containers: []
	W1229 07:15:27.568178  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:15:27.568198  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:15:27.568214  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:15:27.637286  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:15:27.637320  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:15:27.667307  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:15:27.667336  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:15:27.701378  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:15:27.701406  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:15:27.728319  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:15:27.728344  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:15:27.728357  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:15:27.754049  225445 logs.go:123] Gathering logs for kube-controller-manager [e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de] ...
	I1229 07:15:27.754078  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e9d5b34dd02d51c0010757fb5a8250c74f693ed88fa87d76d4b6b6a6938ab9de"
	I1229 07:15:27.782641  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:15:27.782667  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:15:29.630488  245459 node_ready.go:49] node "no-preload-122332" is "Ready"
	I1229 07:15:29.630514  245459 node_ready.go:38] duration metric: took 12.502596991s for node "no-preload-122332" to be "Ready" ...
	I1229 07:15:29.630531  245459 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:15:29.630585  245459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:15:29.642388  245459 api_server.go:72] duration metric: took 12.765239409s to wait for apiserver process to appear ...
	I1229 07:15:29.642416  245459 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:15:29.642432  245459 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:15:29.647723  245459 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1229 07:15:29.648702  245459 api_server.go:141] control plane version: v1.35.0
	I1229 07:15:29.648729  245459 api_server.go:131] duration metric: took 6.306192ms to wait for apiserver health ...
	I1229 07:15:29.648739  245459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:15:29.651479  245459 system_pods.go:59] 8 kube-system pods found
	I1229 07:15:29.651508  245459 system_pods.go:61] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:15:29.651515  245459 system_pods.go:61] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running
	I1229 07:15:29.651519  245459 system_pods.go:61] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running
	I1229 07:15:29.651523  245459 system_pods.go:61] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running
	I1229 07:15:29.651530  245459 system_pods.go:61] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:15:29.651534  245459 system_pods.go:61] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running
	I1229 07:15:29.651539  245459 system_pods.go:61] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running
	I1229 07:15:29.651544  245459 system_pods.go:61] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:15:29.651549  245459 system_pods.go:74] duration metric: took 2.805522ms to wait for pod list to return data ...
	I1229 07:15:29.651558  245459 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:15:29.653677  245459 default_sa.go:45] found service account: "default"
	I1229 07:15:29.653694  245459 default_sa.go:55] duration metric: took 2.130551ms for default service account to be created ...
	I1229 07:15:29.653701  245459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:15:29.656186  245459 system_pods.go:86] 8 kube-system pods found
	I1229 07:15:29.656210  245459 system_pods.go:89] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:15:29.656228  245459 system_pods.go:89] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running
	I1229 07:15:29.656236  245459 system_pods.go:89] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running
	I1229 07:15:29.656248  245459 system_pods.go:89] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running
	I1229 07:15:29.656261  245459 system_pods.go:89] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:15:29.656270  245459 system_pods.go:89] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running
	I1229 07:15:29.656280  245459 system_pods.go:89] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running
	I1229 07:15:29.656288  245459 system_pods.go:89] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:15:29.656315  245459 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:15:29.942308  245459 system_pods.go:86] 8 kube-system pods found
	I1229 07:15:29.942336  245459 system_pods.go:89] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Running
	I1229 07:15:29.942342  245459 system_pods.go:89] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running
	I1229 07:15:29.942348  245459 system_pods.go:89] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running
	I1229 07:15:29.942352  245459 system_pods.go:89] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running
	I1229 07:15:29.942358  245459 system_pods.go:89] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:15:29.942362  245459 system_pods.go:89] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running
	I1229 07:15:29.942366  245459 system_pods.go:89] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running
	I1229 07:15:29.942370  245459 system_pods.go:89] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Running
	I1229 07:15:29.942378  245459 system_pods.go:126] duration metric: took 288.67144ms to wait for k8s-apps to be running ...
	I1229 07:15:29.942385  245459 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:15:29.942427  245459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:29.955949  245459 system_svc.go:56] duration metric: took 13.546148ms WaitForService to wait for kubelet
	I1229 07:15:29.955984  245459 kubeadm.go:587] duration metric: took 13.078855435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:15:29.956010  245459 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:15:29.959094  245459 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:15:29.959116  245459 node_conditions.go:123] node cpu capacity is 8
	I1229 07:15:29.959131  245459 node_conditions.go:105] duration metric: took 3.11556ms to run NodePressure ...
	I1229 07:15:29.959158  245459 start.go:242] waiting for startup goroutines ...
	I1229 07:15:29.959173  245459 start.go:247] waiting for cluster config update ...
	I1229 07:15:29.959201  245459 start.go:256] writing updated cluster config ...
	I1229 07:15:29.959519  245459 ssh_runner.go:195] Run: rm -f paused
	I1229 07:15:29.964589  245459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:15:29.968050  245459 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6rcr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:29.972354  245459 pod_ready.go:94] pod "coredns-7d764666f9-6rcr2" is "Ready"
	I1229 07:15:29.972375  245459 pod_ready.go:86] duration metric: took 4.297745ms for pod "coredns-7d764666f9-6rcr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:29.974335  245459 pod_ready.go:83] waiting for pod "etcd-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:29.978013  245459 pod_ready.go:94] pod "etcd-no-preload-122332" is "Ready"
	I1229 07:15:29.978032  245459 pod_ready.go:86] duration metric: took 3.678385ms for pod "etcd-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:29.979873  245459 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:29.983341  245459 pod_ready.go:94] pod "kube-apiserver-no-preload-122332" is "Ready"
	I1229 07:15:29.983359  245459 pod_ready.go:86] duration metric: took 3.46823ms for pod "kube-apiserver-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:29.985118  245459 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:30.768896  245459 pod_ready.go:94] pod "kube-controller-manager-no-preload-122332" is "Ready"
	I1229 07:15:30.768925  245459 pod_ready.go:86] duration metric: took 783.78845ms for pod "kube-controller-manager-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:30.969297  245459 pod_ready.go:83] waiting for pod "kube-proxy-qvww2" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:31.368619  245459 pod_ready.go:94] pod "kube-proxy-qvww2" is "Ready"
	I1229 07:15:31.368642  245459 pod_ready.go:86] duration metric: took 399.316593ms for pod "kube-proxy-qvww2" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:31.568944  245459 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:31.968630  245459 pod_ready.go:94] pod "kube-scheduler-no-preload-122332" is "Ready"
	I1229 07:15:31.968657  245459 pod_ready.go:86] duration metric: took 399.681927ms for pod "kube-scheduler-no-preload-122332" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:15:31.968671  245459 pod_ready.go:40] duration metric: took 2.004052843s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:15:32.015113  245459 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:15:32.016913  245459 out.go:179] * Done! kubectl is now configured to use "no-preload-122332" cluster and "default" namespace by default
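The sections below ("==> CRI-O <==", "==> container status <==", ...) appear to be the diagnostics collected for the old-k8s-version-876718 profile after the failure. A rough sketch of commands that produce equivalent output by hand, assuming the profile is still running (the journalctl and crictl invocations appear verbatim earlier in this log; whether the report was generated exactly this way is an assumption):

	minikube -p old-k8s-version-876718 logs --file=logs.txt
	minikube -p old-k8s-version-876718 ssh "sudo journalctl -u crio -n 400"
	minikube -p old-k8s-version-876718 ssh "sudo crictl ps -a"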
	
	
	==> CRI-O <==
	Dec 29 07:14:54 old-k8s-version-876718 crio[581]: time="2025-12-29T07:14:54.214772047Z" level=info msg="Created container 9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s/kubernetes-dashboard" id=80e0f23f-ca41-4cbf-a9c6-d031753609fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:14:54 old-k8s-version-876718 crio[581]: time="2025-12-29T07:14:54.215348646Z" level=info msg="Starting container: 9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d" id=668d5f43-79ef-40f3-8f66-4b89cbe5f865 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:14:54 old-k8s-version-876718 crio[581]: time="2025-12-29T07:14:54.217068343Z" level=info msg="Started container" PID=1760 containerID=9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s/kubernetes-dashboard id=668d5f43-79ef-40f3-8f66-4b89cbe5f865 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7a686f95cd2efe1677e49f57f8079986cb576a6e2e1017007868f4a846348f4
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.920534725Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=53abb409-d5dd-444e-a09a-cc295cbebccc name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.921428709Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=93465f7b-0e62-42d7-8bce-5619bb058b6a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.922429967Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9a1a4ae6-0cce-423d-97c3-1e844626b155 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.922574052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.92711518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.927350882Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/358467f876ebda4dc98b2a58ee54a2d105544a149b8439a201f686350f83f461/merged/etc/passwd: no such file or directory"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.927393449Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/358467f876ebda4dc98b2a58ee54a2d105544a149b8439a201f686350f83f461/merged/etc/group: no such file or directory"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.927670123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.952331517Z" level=info msg="Created container 1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291: kube-system/storage-provisioner/storage-provisioner" id=9a1a4ae6-0cce-423d-97c3-1e844626b155 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.952875969Z" level=info msg="Starting container: 1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291" id=a4f38460-1867-419d-b4e5-cd100b9ee64e name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:15:06 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:06.954630557Z" level=info msg="Started container" PID=1783 containerID=1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291 description=kube-system/storage-provisioner/storage-provisioner id=a4f38460-1867-419d-b4e5-cd100b9ee64e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b7e9a850b29e52b48cd76092abef8f4ac926e2341d6e98a4421700eba006433
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.816038313Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d40568c0-f508-4f9a-b0ca-2fce8005217c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.817138726Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=32949e6d-08f9-4189-8eec-f715c9abf720 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.818257066Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper" id=98c0fb52-2225-4cb4-97e3-bf71bdd792a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.818423339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.82515048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.825644405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.859401715Z" level=info msg="Created container 066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper" id=98c0fb52-2225-4cb4-97e3-bf71bdd792a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.86004188Z" level=info msg="Starting container: 066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e" id=8d63c492-bd2a-4f58-a80f-b5e1028bf08b name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.862394835Z" level=info msg="Started container" PID=1799 containerID=066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper id=8d63c492-bd2a-4f58-a80f-b5e1028bf08b name=/runtime.v1.RuntimeService/StartContainer sandboxID=010a1e3a7f346f7621d89fae77040ebba110b7d244254feed8eb905d42eabb66
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.937693385Z" level=info msg="Removing container: b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3" id=4421796b-28d7-4c51-ab1a-0df137c78ad7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:15:11 old-k8s-version-876718 crio[581]: time="2025-12-29T07:15:11.947936553Z" level=info msg="Removed container b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5/dashboard-metrics-scraper" id=4421796b-28d7-4c51-ab1a-0df137c78ad7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	066fa833e3723       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   010a1e3a7f346       dashboard-metrics-scraper-5f989dc9cf-crtg5       kubernetes-dashboard
	1580265780bb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   4b7e9a850b29e       storage-provisioner                              kube-system
	9d62d4e5a2727       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago       Running             kubernetes-dashboard        0                   c7a686f95cd2e       kubernetes-dashboard-8694d4445c-bfg2s            kubernetes-dashboard
	31a800f24afba       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   a0d5a7e778c7e       busybox                                          default
	ae7be12ff50cb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   2e01748f89f3a       coredns-5dd5756b68-pnstl                         kube-system
	eccdd751d5c90       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   032867e7b8ecb       kindnet-kgr4x                                    kube-system
	604c0d1f5c7a0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   44cf0ff67c1a3       kube-proxy-2v9kr                                 kube-system
	ffdc68478751c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   4b7e9a850b29e       storage-provisioner                              kube-system
	96d9acdaa9e81       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   c400430fdb9b7       kube-scheduler-old-k8s-version-876718            kube-system
	bacf752453b6e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   127d072e7e307       etcd-old-k8s-version-876718                      kube-system
	69931aee6620e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   784883888af1d       kube-apiserver-old-k8s-version-876718            kube-system
	176fbe8370904       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   293f6383c4e5a       kube-controller-manager-old-k8s-version-876718   kube-system
	
	
	==> coredns [ae7be12ff50cb259b5279dc02c3c2df281a1f08343c6bdd43a0534b08ec9a6b6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35966 - 65488 "HINFO IN 2699774808872182651.1480664370928045377. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018340458s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-876718
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-876718
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=old-k8s-version-876718
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_13_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-876718
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:15:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:15:06 +0000   Mon, 29 Dec 2025 07:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-876718
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                89c29f88-abf1-4b86-a174-1e64c8cd0857
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-pnstl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-876718                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-kgr4x                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-876718             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-876718    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-2v9kr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-876718             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-crtg5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bfg2s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x9 over 2m7s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-876718 event: Registered Node old-k8s-version-876718 in Controller
	  Normal  NodeReady                97s                  kubelet          Node old-k8s-version-876718 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x9 over 61s)    kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node old-k8s-version-876718 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 61s)    kubelet          Node old-k8s-version-876718 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node old-k8s-version-876718 event: Registered Node old-k8s-version-876718 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [bacf752453b6e31e76322e28d8bd8e4495c2626f31b52d8c86de2430551e0205] <==
	{"level":"info","ts":"2025-12-29T07:14:33.370838Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:14:33.370851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-29T07:14:33.370997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-29T07:14:33.370862Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:14:33.371098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:14:33.371128Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:14:33.37342Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:14:33.373585Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-29T07:14:33.373641Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-29T07:14:33.373712Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:14:33.373741Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:14:34.461064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:14:34.461112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:14:34.461157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:14:34.461175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.461183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.461231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.461246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:14:34.462324Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-876718 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:14:34.462398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:14:34.462543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:14:34.462459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:14:34.46257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:14:34.463807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:14:34.46381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:15:33 up 58 min,  0 user,  load average: 1.86, 2.58, 1.89
	Linux old-k8s-version-876718 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eccdd751d5c90dc102d5991e820df94c667027233d147fc5276fe889a9653468] <==
	I1229 07:14:36.405724       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:14:36.406037       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1229 07:14:36.406253       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:14:36.406283       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:14:36.406309       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:14:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:14:36.606465       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:14:36.606530       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:14:36.606546       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:14:36.606704       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:14:37.002398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:14:37.002452       1 metrics.go:72] Registering metrics
	I1229 07:14:37.002536       1 controller.go:711] "Syncing nftables rules"
	I1229 07:14:46.606900       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:14:46.606980       1 main.go:301] handling current node
	I1229 07:14:56.607477       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:14:56.607511       1 main.go:301] handling current node
	I1229 07:15:06.607122       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:15:06.607156       1 main.go:301] handling current node
	I1229 07:15:16.606431       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:15:16.606474       1 main.go:301] handling current node
	I1229 07:15:26.609318       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:15:26.609367       1 main.go:301] handling current node
	
	
	==> kube-apiserver [69931aee6620ecef0e707aa69dde3c1c55637a74c6d0b2b17435ae34321b5fda] <==
	I1229 07:14:35.360418       1 handler_discovery.go:404] Starting ResourceDiscoveryManager
	I1229 07:14:35.407866       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:14:35.417459       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1229 07:14:35.456206       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1229 07:14:35.456401       1 shared_informer.go:318] Caches are synced for configmaps
	I1229 07:14:35.456416       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1229 07:14:35.456761       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:14:35.457121       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1229 07:14:35.457237       1 aggregator.go:166] initial CRD sync complete...
	I1229 07:14:35.457249       1 autoregister_controller.go:141] Starting autoregister controller
	I1229 07:14:35.457257       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:14:35.457264       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:14:35.461130       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1229 07:14:35.461144       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1229 07:14:36.271586       1 controller.go:624] quota admission added evaluator for: namespaces
	I1229 07:14:36.301915       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1229 07:14:36.318236       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:14:36.324784       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:14:36.331912       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1229 07:14:36.359052       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1229 07:14:36.364013       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.219.198"}
	I1229 07:14:36.377553       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.34.49"}
	I1229 07:14:47.680135       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1229 07:14:47.829270       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:14:48.080275       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [176fbe8370904a1abad1e6ed78d46681127fa2c11cbc919f309fe0a96e3bf559] <==
	I1229 07:14:47.832395       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1229 07:14:47.938556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="246.35682ms"
	I1229 07:14:47.938656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.521µs"
	I1229 07:14:47.940514       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-bfg2s"
	I1229 07:14:47.940546       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-crtg5"
	I1229 07:14:47.948454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="256.258325ms"
	I1229 07:14:47.948897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="256.667654ms"
	I1229 07:14:47.955279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.218925ms"
	I1229 07:14:47.955376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.627µs"
	I1229 07:14:47.956154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.65015ms"
	I1229 07:14:47.956264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.538µs"
	I1229 07:14:47.960385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="157.17µs"
	I1229 07:14:47.979753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.47µs"
	I1229 07:14:48.200095       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:14:48.217480       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:14:48.217508       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1229 07:14:51.892621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.417µs"
	I1229 07:14:52.900934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.655µs"
	I1229 07:14:53.898209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.108µs"
	I1229 07:14:55.006120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.911636ms"
	I1229 07:14:55.006254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.837µs"
	I1229 07:15:11.948150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.703µs"
	I1229 07:15:15.684770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.118027ms"
	I1229 07:15:15.684860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.275µs"
	I1229 07:15:18.858374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.883µs"
	
	
	==> kube-proxy [604c0d1f5c7a0df5b8eb5cb40329d966a9ac5cc854e5051c0596c0c5eb5f91ed] <==
	I1229 07:14:36.245283       1 server_others.go:69] "Using iptables proxy"
	I1229 07:14:36.253880       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1229 07:14:36.273658       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:14:36.276069       1 server_others.go:152] "Using iptables Proxier"
	I1229 07:14:36.276112       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1229 07:14:36.276122       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1229 07:14:36.276169       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1229 07:14:36.276443       1 server.go:846] "Version info" version="v1.28.0"
	I1229 07:14:36.276462       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:14:36.277121       1 config.go:315] "Starting node config controller"
	I1229 07:14:36.277139       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1229 07:14:36.277365       1 config.go:97] "Starting endpoint slice config controller"
	I1229 07:14:36.278106       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1229 07:14:36.278255       1 config.go:188] "Starting service config controller"
	I1229 07:14:36.278267       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1229 07:14:36.377522       1 shared_informer.go:318] Caches are synced for node config
	I1229 07:14:36.378856       1 shared_informer.go:318] Caches are synced for service config
	I1229 07:14:36.378871       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [96d9acdaa9e812fcd678cb5aa4c56ffc81629c3f8f930d7c429c5c520e7684c8] <==
	I1229 07:14:33.907100       1 serving.go:348] Generated self-signed cert in-memory
	I1229 07:14:35.430321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1229 07:14:35.430350       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:14:35.433947       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1229 07:14:35.433971       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1229 07:14:35.433991       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:14:35.434016       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1229 07:14:35.434017       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1229 07:14:35.434038       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1229 07:14:35.435060       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1229 07:14:35.435110       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1229 07:14:35.534215       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1229 07:14:35.537316       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1229 07:14:35.537330       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.092822     741 projected.go:198] Error preparing data for projected volume kube-api-access-ksqf7 for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5: configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.092895     741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99331fe4-6ed1-40f6-a042-2fe358572968-kube-api-access-ksqf7 podName:99331fe4-6ed1-40f6-a042-2fe358572968 nodeName:}" failed. No retries permitted until 2025-12-29 07:14:48.592872375 +0000 UTC m=+15.869252498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ksqf7" (UniqueName: "kubernetes.io/projected/99331fe4-6ed1-40f6-a042-2fe358572968-kube-api-access-ksqf7") pod "dashboard-metrics-scraper-5f989dc9cf-crtg5" (UID: "99331fe4-6ed1-40f6-a042-2fe358572968") : configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.093888     741 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.093925     741 projected.go:198] Error preparing data for projected volume kube-api-access-5btrb for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s: configmap "kube-root-ca.crt" not found
	Dec 29 07:14:48 old-k8s-version-876718 kubelet[741]: E1229 07:14:48.093973     741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afa0a6d0-35c1-415f-837e-8217b89f54fc-kube-api-access-5btrb podName:afa0a6d0-35c1-415f-837e-8217b89f54fc nodeName:}" failed. No retries permitted until 2025-12-29 07:14:48.59395877 +0000 UTC m=+15.870338894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5btrb" (UniqueName: "kubernetes.io/projected/afa0a6d0-35c1-415f-837e-8217b89f54fc-kube-api-access-5btrb") pod "kubernetes-dashboard-8694d4445c-bfg2s" (UID: "afa0a6d0-35c1-415f-837e-8217b89f54fc") : configmap "kube-root-ca.crt" not found
	Dec 29 07:14:51 old-k8s-version-876718 kubelet[741]: I1229 07:14:51.878872     741 scope.go:117] "RemoveContainer" containerID="b9ed89bf5f182630b48cc8fede54d1c3d86cd5a1df609989dc7ac13c1606f58b"
	Dec 29 07:14:52 old-k8s-version-876718 kubelet[741]: I1229 07:14:52.882664     741 scope.go:117] "RemoveContainer" containerID="b9ed89bf5f182630b48cc8fede54d1c3d86cd5a1df609989dc7ac13c1606f58b"
	Dec 29 07:14:52 old-k8s-version-876718 kubelet[741]: I1229 07:14:52.882934     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:14:52 old-k8s-version-876718 kubelet[741]: E1229 07:14:52.883339     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:14:53 old-k8s-version-876718 kubelet[741]: I1229 07:14:53.885491     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:14:53 old-k8s-version-876718 kubelet[741]: E1229 07:14:53.885863     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:14:54 old-k8s-version-876718 kubelet[741]: I1229 07:14:54.944305     741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfg2s" podStartSLOduration=2.649624286 podCreationTimestamp="2025-12-29 07:14:47 +0000 UTC" firstStartedPulling="2025-12-29 07:14:48.879242852 +0000 UTC m=+16.155622989" lastFinishedPulling="2025-12-29 07:14:54.173847699 +0000 UTC m=+21.450227827" observedRunningTime="2025-12-29 07:14:54.944090784 +0000 UTC m=+22.220470928" watchObservedRunningTime="2025-12-29 07:14:54.944229124 +0000 UTC m=+22.220609248"
	Dec 29 07:14:58 old-k8s-version-876718 kubelet[741]: I1229 07:14:58.848328     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:14:58 old-k8s-version-876718 kubelet[741]: E1229 07:14:58.848595     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:15:06 old-k8s-version-876718 kubelet[741]: I1229 07:15:06.920074     741 scope.go:117] "RemoveContainer" containerID="ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: I1229 07:15:11.815377     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: I1229 07:15:11.936422     741 scope.go:117] "RemoveContainer" containerID="b8f6036c257e401b2cb337b3060c8ff3b35cd180ef95916218798d2ecc64f2e3"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: I1229 07:15:11.936650     741 scope.go:117] "RemoveContainer" containerID="066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	Dec 29 07:15:11 old-k8s-version-876718 kubelet[741]: E1229 07:15:11.937013     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:15:18 old-k8s-version-876718 kubelet[741]: I1229 07:15:18.848361     741 scope.go:117] "RemoveContainer" containerID="066fa833e37233849566c1e1480b105296402328ef58b83df823298bf2eb8f4e"
	Dec 29 07:15:18 old-k8s-version-876718 kubelet[741]: E1229 07:15:18.848652     741 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-crtg5_kubernetes-dashboard(99331fe4-6ed1-40f6-a042-2fe358572968)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-crtg5" podUID="99331fe4-6ed1-40f6-a042-2fe358572968"
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:15:29 old-k8s-version-876718 systemd[1]: kubelet.service: Consumed 1.566s CPU time.
	
	
	==> kubernetes-dashboard [9d62d4e5a2727d58ed4b3c8405a2b5330cd761d5a2c4e9aa2d1157dc7249f99d] <==
	2025/12/29 07:14:54 Starting overwatch
	2025/12/29 07:14:54 Using namespace: kubernetes-dashboard
	2025/12/29 07:14:54 Using in-cluster config to connect to apiserver
	2025/12/29 07:14:54 Using secret token for csrf signing
	2025/12/29 07:14:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:14:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:14:54 Successful initial request to the apiserver, version: v1.28.0
	2025/12/29 07:14:54 Generating JWE encryption key
	2025/12/29 07:14:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:14:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:14:54 Initializing JWE encryption key from synchronized object
	2025/12/29 07:14:54 Creating in-cluster Sidecar client
	2025/12/29 07:14:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:14:54 Serving insecurely on HTTP port: 9090
	2025/12/29 07:15:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1580265780bb72872432923c6589598a07efda6af2d5ede23afbf8a4ff201291] <==
	I1229 07:15:06.965798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:15:06.973514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:15:06.973563       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 07:15:24.370193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:15:24.370392       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-876718_aa15927a-3006-42c9-92b8-345f1f431730!
	I1229 07:15:24.370359       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9eb83101-af4b-4f08-89af-4c2a64d6d770", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-876718_aa15927a-3006-42c9-92b8-345f1f431730 became leader
	I1229 07:15:24.470571       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-876718_aa15927a-3006-42c9-92b8-345f1f431730!
	
	
	==> storage-provisioner [ffdc68478751c4ef8ecfb26589718e753fec507bdd303d88a626d88adc6b76b9] <==
	I1229 07:14:36.182777       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:15:06.184543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-876718 -n old-k8s-version-876718
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-876718 -n old-k8s-version-876718: exit status 2 (333.866917ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-876718 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.57s)
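Note on manual triage: the Pause subtest stops the kubelet (visible at the end of the kubelet log above, where systemd deactivates kubelet.service) and then probes component status, which here returned exit status 2 while the API server still reported Running. One way to retrace that pause step by hand, outside the test harness, is roughly the following (a sketch, assuming the old-k8s-version-876718 profile still exists on the host):

	# repeat the pause operation the test exercised, with verbose logging
	out/minikube-linux-amd64 pause -p old-k8s-version-876718 --alsologtostderr -v=1
	# and undo it afterwards so the profile is usable again
	out/minikube-linux-amd64 unpause -p old-k8s-version-876718 --alsologtostderr -v=1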

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (478.913516ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:15:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
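The paused-state check quoted in the stderr above shells out to runc inside the node and fails because /run/runc is missing. For manual triage, a rough reproduction of that same check could look like the following (a sketch, assuming the no-preload-122332 profile is still running; the ls step is only there to confirm whether the runc state directory exists):

	# re-run the exact listing the paused-state check invoked (taken from the error above)
	out/minikube-linux-amd64 ssh -p no-preload-122332 -- sudo runc list -f json
	# check for the state directory whose absence caused the failure
	out/minikube-linux-amd64 ssh -p no-preload-122332 -- ls /run/runc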
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-122332 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-122332 describe deploy/metrics-server -n kube-system: exit status 1 (149.50789ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-122332 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
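Because the metrics-server deployment was never created, the image assertion had nothing to inspect. Had the deployment existed, a check along these lines would show which image it actually references (a sketch; the context name comes from the commands above and the jsonpath expression is illustrative, not part of the test):

	kubectl --context no-preload-122332 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'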
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-122332
helpers_test.go:244: (dbg) docker inspect no-preload-122332:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f",
	        "Created": "2025-12-29T07:14:49.513032226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246044,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:14:49.567583051Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/hostname",
	        "HostsPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/hosts",
	        "LogPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f-json.log",
	        "Name": "/no-preload-122332",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-122332:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-122332",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f",
	                "LowerDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-122332",
	                "Source": "/var/lib/docker/volumes/no-preload-122332/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-122332",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-122332",
	                "name.minikube.sigs.k8s.io": "no-preload-122332",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7a20772e1460709d6d1a36b50747866e28d2ac087d737e538610dacc6d171fcf",
	            "SandboxKey": "/var/run/docker/netns/7a20772e1460",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-122332": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18727729929e09903a8602637fce4f42992b3e819228d475208a35800e81902c",
	                    "EndpointID": "0b788c76e48de6dccec19d5d1450d4e89e968b883a5a934546a6faa47ed4887a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:fb:62:16:98:1d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-122332",
	                        "9aa41434eb0f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
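For reference, the mapped host ports recorded in the "Ports" block of the inspect dump above can be read back with a `docker container inspect` Go template of the kind the harness itself invokes (see the cli_runner lines later in the logs). The sketch below is illustrative only and not part of the test suite: the container name no-preload-122332 is taken from this report, everything else (file name, function names) is assumed.

	// portcheck.go — illustrative sketch, not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort shells out to the Docker CLI and renders only the first host
	// binding for the given container port (e.g. "22/tcp").
	func hostPort(container, containerPort string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Container name taken from the inspect output above; assumes it is still running.
		port, err := hostPort("no-preload-122332", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port)
	}

Run against the container above, this would print 127.0.0.1:33063 for 22/tcp, matching the NetworkSettings.Ports block in the inspect output.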
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-122332 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-122332 logs -n 25: (1.104441505s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-452455    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p missing-upgrade-967138                                                                                                                                                                                                                     │ missing-upgrade-967138    │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p force-systemd-flag-074338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ stop    │ -p kubernetes-upgrade-174577 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ ssh     │ force-systemd-flag-074338 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ delete  │ -p force-systemd-flag-074338                                                                                                                                                                                                                  │ force-systemd-flag-074338 │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ start   │ -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ cert-options-001954 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ -p cert-options-001954 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ delete  │ -p cert-options-001954                                                                                                                                                                                                                        │ cert-options-001954       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	│ stop    │ -p old-k8s-version-876718 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-876718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                                                                                     │ stopped-upgrade-518014    │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332         │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                                                                                               │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827        │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-122332         │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-452455    │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:15:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:15:41.841072  253540 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:15:41.841421  253540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:41.841426  253540 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:41.841432  253540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:41.841770  253540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:15:41.842455  253540 out.go:368] Setting JSON to false
	I1229 07:15:41.843771  253540 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3494,"bootTime":1766989048,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:15:41.843854  253540 start.go:143] virtualization: kvm guest
	I1229 07:15:41.943825  253540 out.go:179] * [cert-expiration-452455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:15:42.077251  253540 notify.go:221] Checking for updates...
	I1229 07:15:42.077274  253540 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:15:42.109882  253540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:15:42.111866  253540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:15:42.114188  253540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:15:42.121177  253540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:15:42.142841  253540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:15:42.145793  253540 config.go:182] Loaded profile config "cert-expiration-452455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:15:42.146575  253540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:15:42.170847  253540 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:15:42.170919  253540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:15:42.237351  253540 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:15:42.226858235 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:15:42.237501  253540 docker.go:319] overlay module found
	I1229 07:15:42.239778  253540 out.go:179] * Using the docker driver based on existing profile
	I1229 07:15:42.241005  253540 start.go:309] selected driver: docker
	I1229 07:15:42.241012  253540 start.go:928] validating driver "docker" against &{Name:cert-expiration-452455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-452455 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:15:42.241107  253540 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:15:42.241826  253540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:15:42.313339  253540 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-29 07:15:42.302079444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:15:42.313657  253540 cni.go:84] Creating CNI manager for ""
	I1229 07:15:42.313725  253540 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:15:42.313777  253540 start.go:353] cluster config:
	{Name:cert-expiration-452455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-452455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:15:42.315838  253540 out.go:179] * Starting "cert-expiration-452455" primary control-plane node in "cert-expiration-452455" cluster
	I1229 07:15:42.317127  253540 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:15:42.319109  253540 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:15:42.320308  253540 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:15:42.320338  253540 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:15:42.320346  253540 cache.go:65] Caching tarball of preloaded images
	I1229 07:15:42.320389  253540 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:15:42.320451  253540 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:15:42.320461  253540 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:15:42.320580  253540 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/cert-expiration-452455/config.json ...
	I1229 07:15:42.347230  253540 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:15:42.347245  253540 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:15:42.347266  253540 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:15:42.347300  253540 start.go:360] acquireMachinesLock for cert-expiration-452455: {Name:mkc83e864de2fd093b9623ec79522fb17372b8bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:15:42.347379  253540 start.go:364] duration metric: took 59.509µs to acquireMachinesLock for "cert-expiration-452455"
	I1229 07:15:42.347396  253540 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:15:42.347400  253540 fix.go:54] fixHost starting: 
	I1229 07:15:42.347606  253540 cli_runner.go:164] Run: docker container inspect cert-expiration-452455 --format={{.State.Status}}
	I1229 07:15:42.369045  253540 fix.go:112] recreateIfNeeded on cert-expiration-452455: state=Running err=<nil>
	W1229 07:15:42.369066  253540 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 29 07:15:29 no-preload-122332 crio[776]: time="2025-12-29T07:15:29.740064617Z" level=info msg="Starting container: 80d15df2db23f47bc65c5c1655a8fdcd102bec0c0063457ebdeffc9181b71f1d" id=a835e3f4-8d7b-4204-af63-2190f9332f8e name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:15:29 no-preload-122332 crio[776]: time="2025-12-29T07:15:29.74217074Z" level=info msg="Started container" PID=2831 containerID=80d15df2db23f47bc65c5c1655a8fdcd102bec0c0063457ebdeffc9181b71f1d description=kube-system/coredns-7d764666f9-6rcr2/coredns id=a835e3f4-8d7b-4204-af63-2190f9332f8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=bfb33c61ae5e57720facedf5ddc06cbcece997837a1d5cd988faacb1c1ce1347
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.525303295Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e8f3aafc-9b9a-4611-9145-d1d649c91c9e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.525396993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.530277346Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a47aa103a5cb945e7154d20beb8f5de51e5e78f588ab5efdf8f42627ece39f63 UID:64807d8c-0a89-4e9c-a816-fff1e31fce8f NetNS:/var/run/netns/8749067e-2414-4298-b4a6-41c991bab40d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0010082b0}] Aliases:map[]}"
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.530306304Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.549550747Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a47aa103a5cb945e7154d20beb8f5de51e5e78f588ab5efdf8f42627ece39f63 UID:64807d8c-0a89-4e9c-a816-fff1e31fce8f NetNS:/var/run/netns/8749067e-2414-4298-b4a6-41c991bab40d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0010082b0}] Aliases:map[]}"
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.549753273Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.550752585Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.552003503Z" level=info msg="Ran pod sandbox a47aa103a5cb945e7154d20beb8f5de51e5e78f588ab5efdf8f42627ece39f63 with infra container: default/busybox/POD" id=e8f3aafc-9b9a-4611-9145-d1d649c91c9e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.553338838Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c4242c4-acad-4f56-ba73-9ee82cc41ced name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.553606218Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8c4242c4-acad-4f56-ba73-9ee82cc41ced name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.553670118Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8c4242c4-acad-4f56-ba73-9ee82cc41ced name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.554470414Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4ddfeb7d-f101-41b1-8866-96e413a69450 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:15:32 no-preload-122332 crio[776]: time="2025-12-29T07:15:32.554752501Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.777648424Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4ddfeb7d-f101-41b1-8866-96e413a69450 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.778438466Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f6e8494f-374d-43b8-a7cc-bd302664d091 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.780309053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=23202991-ece2-453a-8107-b26fd81d44fa name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.783861114Z" level=info msg="Creating container: default/busybox/busybox" id=57d9ea84-80d3-4d68-9a5c-80f7dc693bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.783998378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.788188962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.788817894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.818707743Z" level=info msg="Created container 1f5115c5cedfdfe759aa68e0cf48a083f1faeff9365f505490b283577eec7289: default/busybox/busybox" id=57d9ea84-80d3-4d68-9a5c-80f7dc693bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.819392888Z" level=info msg="Starting container: 1f5115c5cedfdfe759aa68e0cf48a083f1faeff9365f505490b283577eec7289" id=82efaa3e-b429-44b2-b479-55ac85f23d7d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:15:33 no-preload-122332 crio[776]: time="2025-12-29T07:15:33.821600434Z" level=info msg="Started container" PID=2903 containerID=1f5115c5cedfdfe759aa68e0cf48a083f1faeff9365f505490b283577eec7289 description=default/busybox/busybox id=82efaa3e-b429-44b2-b479-55ac85f23d7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a47aa103a5cb945e7154d20beb8f5de51e5e78f588ab5efdf8f42627ece39f63
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1f5115c5cedfd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   a47aa103a5cb9       busybox                                     default
	80d15df2db23f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   bfb33c61ae5e5       coredns-7d764666f9-6rcr2                    kube-system
	b533e73acb848       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   8d91806809979       storage-provisioner                         kube-system
	5b9948d7ca2c7       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   a4398399cec16       kindnet-rq99f                               kube-system
	68c31ee027806       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      25 seconds ago      Running             kube-proxy                0                   105cc74153d78       kube-proxy-qvww2                            kube-system
	c391663458e81       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   19ae5c57d6ac8       kube-scheduler-no-preload-122332            kube-system
	77f5f1533c33e       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   2977eeda3ac25       etcd-no-preload-122332                      kube-system
	99453e3531eed       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   6a79d546811e1       kube-apiserver-no-preload-122332            kube-system
	ef7eca62daf23       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   75af98b2f7f95       kube-controller-manager-no-preload-122332   kube-system
	
	
	==> coredns [80d15df2db23f47bc65c5c1655a8fdcd102bec0c0063457ebdeffc9181b71f1d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58881 - 15544 "HINFO IN 45233229585164914.3718826352080826028. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.018481415s
	
	
	==> describe nodes <==
	Name:               no-preload-122332
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-122332
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=no-preload-122332
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_15_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-122332
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:15:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:15:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:15:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:15:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:15:42 +0000   Mon, 29 Dec 2025 07:15:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-122332
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                da04b11d-c694-431a-acb9-a897f234eb76
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-6rcr2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-122332                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-rq99f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-122332             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-122332    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-qvww2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-122332             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-122332 event: Registered Node no-preload-122332 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [77f5f1533c33effcb599962e08ceef55baf7822e8f13e2a301894d78838b2d4c] <==
	{"level":"info","ts":"2025-12-29T07:15:08.358064Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:15:08.358083Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:15:08.358094Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:15:08.358691Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:08.359209Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:15:08.359229Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-122332 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:15:08.359238Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:15:08.359469Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:15:08.359493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:15:08.359492Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:08.359578Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:08.359620Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:08.359657Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:15:08.359804Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:15:08.360343Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:15:08.360373Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:15:08.363761Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:15:08.363835Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-12-29T07:15:09.619565Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.182776ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571767156368883996 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-122332.18859e52b6ff71e8\" mod_revision:84 > success:<request_put:<key:\"/registry/events/default/no-preload-122332.18859e52b6ff71e8\" value_size:609 lease:6571767156368883936 >> failure:<request_range:<key:\"/registry/events/default/no-preload-122332.18859e52b6ff71e8\" > >>","response":"size:14"}
	{"level":"info","ts":"2025-12-29T07:15:09.619657Z","caller":"traceutil/trace.go:172","msg":"trace[1252524988] linearizableReadLoop","detail":"{readStateIndex:92; appliedIndex:91; }","duration":"109.065935ms","start":"2025-12-29T07:15:09.510582Z","end":"2025-12-29T07:15:09.619647Z","steps":["trace[1252524988] 'read index received'  (duration: 23.892µs)","trace[1252524988] 'applied index is now lower than readState.Index'  (duration: 109.041189ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-29T07:15:09.619720Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.134978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-42mff5v6kj5njtomxdlqi2xhoi\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-29T07:15:09.619704Z","caller":"traceutil/trace.go:172","msg":"trace[1990253926] transaction","detail":"{read_only:false; response_revision:87; number_of_response:1; }","duration":"183.923637ms","start":"2025-12-29T07:15:09.435754Z","end":"2025-12-29T07:15:09.619678Z","steps":["trace[1990253926] 'process raft request'  (duration: 55.300671ms)","trace[1990253926] 'compare'  (duration: 128.083545ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-29T07:15:09.619742Z","caller":"traceutil/trace.go:172","msg":"trace[1846100131] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-42mff5v6kj5njtomxdlqi2xhoi; range_end:; response_count:0; response_revision:87; }","duration":"109.162117ms","start":"2025-12-29T07:15:09.510574Z","end":"2025-12-29T07:15:09.619736Z","steps":["trace[1846100131] 'agreement among raft nodes before linearized reading'  (duration: 109.104679ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T07:15:09.819665Z","caller":"traceutil/trace.go:172","msg":"trace[1834156086] transaction","detail":"{read_only:false; response_revision:90; number_of_response:1; }","duration":"133.472846ms","start":"2025-12-29T07:15:09.686178Z","end":"2025-12-29T07:15:09.819651Z","steps":["trace[1834156086] 'process raft request'  (duration: 61.329425ms)","trace[1834156086] 'compare'  (duration: 72.055695ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-29T07:15:41.929745Z","caller":"traceutil/trace.go:172","msg":"trace[378071254] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"127.764857ms","start":"2025-12-29T07:15:41.801964Z","end":"2025-12-29T07:15:41.929728Z","steps":["trace[378071254] 'process raft request'  (duration: 127.663522ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:15:43 up 58 min,  0 user,  load average: 1.80, 2.54, 1.89
	Linux no-preload-122332 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b9948d7ca2c710075e1fda4a31b98880fc584d190ea25b9a9637da4430286cc] <==
	I1229 07:15:18.982341       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:15:18.982603       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1229 07:15:18.982722       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:15:18.982742       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:15:18.982760       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:15:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:15:19.185051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:15:19.279980       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:15:19.280004       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:15:19.280151       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:15:19.584844       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:15:19.584921       1 metrics.go:72] Registering metrics
	I1229 07:15:19.584984       1 controller.go:711] "Syncing nftables rules"
	I1229 07:15:29.185442       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:15:29.185490       1 main.go:301] handling current node
	I1229 07:15:39.185735       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:15:39.185775       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99453e3531eed109358cd73aaf1e4ffc6f52f72f8c92bcd508c3e69a8c57f914] <==
	I1229 07:15:09.305414       1 controller.go:667] quota admission added evaluator for: namespaces
	E1229 07:15:09.308750       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1229 07:15:09.308780       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:15:09.308856       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:15:09.315691       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:15:09.315851       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:15:09.621613       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:15:10.208059       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:15:10.211722       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:15:10.211736       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:15:10.621722       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:15:10.653159       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:15:10.711012       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:15:10.717469       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1229 07:15:10.718440       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:15:10.722371       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:15:11.248519       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:15:11.902464       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:15:11.910805       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:15:11.920822       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:15:17.053540       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:15:17.059514       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:15:17.150394       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:15:17.248793       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1229 07:15:41.304100       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:43824: use of closed network connection
	
	
	==> kube-controller-manager [ef7eca62daf230f1acb2eef8da3d44bbbefe5996884ce5077ba71ef4d58d07f7] <==
	I1229 07:15:16.059777       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.059825       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.059848       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.059884       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.059918       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.059931       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060008       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060024       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060039       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060203       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060351       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060412       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060693       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.059832       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.060809       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.061127       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.061785       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.063326       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-122332" podCIDRs=["10.244.0.0/24"]
	I1229 07:15:16.063390       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.063715       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:15:16.156776       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:16.156863       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:15:16.156873       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:15:16.164182       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:31.060656       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [68c31ee0278066396c13f3e63de8bf7bb9217170312ac12b453b4bdd0fee306a] <==
	I1229 07:15:17.682797       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:15:17.753162       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:15:17.853638       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:17.853677       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1229 07:15:17.853790       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:15:17.881881       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:15:17.882105       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:15:17.889849       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:15:17.890547       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:15:17.890626       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:15:17.892685       1 config.go:309] "Starting node config controller"
	I1229 07:15:17.892740       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:15:17.892893       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:15:17.892952       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:15:17.893630       1 config.go:200] "Starting service config controller"
	I1229 07:15:17.894625       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:15:17.893720       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:15:17.895878       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:15:17.993811       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:15:17.995023       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:15:17.996184       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:15:17.996326       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c391663458e81b11de01e1195fafed40219ede078c8a750d3c939c62f4c49546] <==
	E1229 07:15:09.269693       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:15:09.270773       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:15:09.270823       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:15:09.271483       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:15:09.271493       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:15:09.271652       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:15:09.271675       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:15:09.271764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:15:09.271909       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:15:09.271993       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:15:09.272010       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:15:09.272016       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:15:09.272087       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:15:09.272108       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:15:09.272135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:15:09.272478       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:15:09.272482       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:15:09.272644       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:15:10.150693       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:15:10.369875       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:15:10.410059       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:15:10.447313       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:15:10.476363       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:15:10.481062       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1229 07:15:10.864746       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: I1229 07:15:17.285261    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kjs\" (UniqueName: \"kubernetes.io/projected/01123e19-62cc-4666-8d46-8e51a274f6c9-kube-api-access-j5kjs\") pod \"kube-proxy-qvww2\" (UID: \"01123e19-62cc-4666-8d46-8e51a274f6c9\") " pod="kube-system/kube-proxy-qvww2"
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: I1229 07:15:17.386151    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bb2b7600-b85c-4a5b-aa87-b495394b1749-cni-cfg\") pod \"kindnet-rq99f\" (UID: \"bb2b7600-b85c-4a5b-aa87-b495394b1749\") " pod="kube-system/kindnet-rq99f"
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: I1229 07:15:17.386196    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb2b7600-b85c-4a5b-aa87-b495394b1749-xtables-lock\") pod \"kindnet-rq99f\" (UID: \"bb2b7600-b85c-4a5b-aa87-b495394b1749\") " pod="kube-system/kindnet-rq99f"
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: I1229 07:15:17.386248    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb2b7600-b85c-4a5b-aa87-b495394b1749-lib-modules\") pod \"kindnet-rq99f\" (UID: \"bb2b7600-b85c-4a5b-aa87-b495394b1749\") " pod="kube-system/kindnet-rq99f"
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: I1229 07:15:17.386281    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qtp5\" (UniqueName: \"kubernetes.io/projected/bb2b7600-b85c-4a5b-aa87-b495394b1749-kube-api-access-8qtp5\") pod \"kindnet-rq99f\" (UID: \"bb2b7600-b85c-4a5b-aa87-b495394b1749\") " pod="kube-system/kindnet-rq99f"
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: I1229 07:15:17.793952    2218 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-qvww2" podStartSLOduration=0.793932382 podStartE2EDuration="793.932382ms" podCreationTimestamp="2025-12-29 07:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:15:17.793908892 +0000 UTC m=+6.133983026" watchObservedRunningTime="2025-12-29 07:15:17.793932382 +0000 UTC m=+6.134006498"
	Dec 29 07:15:17 no-preload-122332 kubelet[2218]: E1229 07:15:17.916492    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-122332" containerName="kube-scheduler"
	Dec 29 07:15:18 no-preload-122332 kubelet[2218]: E1229 07:15:18.804768    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-122332" containerName="kube-apiserver"
	Dec 29 07:15:20 no-preload-122332 kubelet[2218]: E1229 07:15:20.284086    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-122332" containerName="kube-controller-manager"
	Dec 29 07:15:20 no-preload-122332 kubelet[2218]: I1229 07:15:20.294011    2218 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-rq99f" podStartSLOduration=2.080117403 podStartE2EDuration="3.293993586s" podCreationTimestamp="2025-12-29 07:15:17 +0000 UTC" firstStartedPulling="2025-12-29 07:15:17.583579412 +0000 UTC m=+5.923653521" lastFinishedPulling="2025-12-29 07:15:18.797455594 +0000 UTC m=+7.137529704" observedRunningTime="2025-12-29 07:15:19.799083064 +0000 UTC m=+8.139157189" watchObservedRunningTime="2025-12-29 07:15:20.293993586 +0000 UTC m=+8.634067701"
	Dec 29 07:15:24 no-preload-122332 kubelet[2218]: E1229 07:15:24.009699    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-122332" containerName="etcd"
	Dec 29 07:15:27 no-preload-122332 kubelet[2218]: E1229 07:15:27.921267    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-122332" containerName="kube-scheduler"
	Dec 29 07:15:28 no-preload-122332 kubelet[2218]: E1229 07:15:28.811326    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-122332" containerName="kube-apiserver"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: I1229 07:15:29.346771    2218 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: I1229 07:15:29.477893    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/37396a97-f1db-4026-af7d-551f0fec188f-tmp\") pod \"storage-provisioner\" (UID: \"37396a97-f1db-4026-af7d-551f0fec188f\") " pod="kube-system/storage-provisioner"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: I1229 07:15:29.477932    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwprl\" (UniqueName: \"kubernetes.io/projected/37396a97-f1db-4026-af7d-551f0fec188f-kube-api-access-xwprl\") pod \"storage-provisioner\" (UID: \"37396a97-f1db-4026-af7d-551f0fec188f\") " pod="kube-system/storage-provisioner"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: I1229 07:15:29.477964    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwggd\" (UniqueName: \"kubernetes.io/projected/51ba32ec-f0c4-4dbd-b555-a3a3f8f02319-kube-api-access-vwggd\") pod \"coredns-7d764666f9-6rcr2\" (UID: \"51ba32ec-f0c4-4dbd-b555-a3a3f8f02319\") " pod="kube-system/coredns-7d764666f9-6rcr2"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: I1229 07:15:29.477980    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51ba32ec-f0c4-4dbd-b555-a3a3f8f02319-config-volume\") pod \"coredns-7d764666f9-6rcr2\" (UID: \"51ba32ec-f0c4-4dbd-b555-a3a3f8f02319\") " pod="kube-system/coredns-7d764666f9-6rcr2"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: E1229 07:15:29.813474    2218 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6rcr2" containerName="coredns"
	Dec 29 07:15:29 no-preload-122332 kubelet[2218]: I1229 07:15:29.821808    2218 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.821793456 podStartE2EDuration="12.821793456s" podCreationTimestamp="2025-12-29 07:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:15:29.821548099 +0000 UTC m=+18.161622213" watchObservedRunningTime="2025-12-29 07:15:29.821793456 +0000 UTC m=+18.161867569"
	Dec 29 07:15:30 no-preload-122332 kubelet[2218]: E1229 07:15:30.289119    2218 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-122332" containerName="kube-controller-manager"
	Dec 29 07:15:30 no-preload-122332 kubelet[2218]: I1229 07:15:30.300860    2218 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6rcr2" podStartSLOduration=13.300839455 podStartE2EDuration="13.300839455s" podCreationTimestamp="2025-12-29 07:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:15:29.831425885 +0000 UTC m=+18.171500002" watchObservedRunningTime="2025-12-29 07:15:30.300839455 +0000 UTC m=+18.640913569"
	Dec 29 07:15:30 no-preload-122332 kubelet[2218]: E1229 07:15:30.815719    2218 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6rcr2" containerName="coredns"
	Dec 29 07:15:31 no-preload-122332 kubelet[2218]: E1229 07:15:31.818337    2218 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6rcr2" containerName="coredns"
	Dec 29 07:15:32 no-preload-122332 kubelet[2218]: I1229 07:15:32.294898    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tll7\" (UniqueName: \"kubernetes.io/projected/64807d8c-0a89-4e9c-a816-fff1e31fce8f-kube-api-access-9tll7\") pod \"busybox\" (UID: \"64807d8c-0a89-4e9c-a816-fff1e31fce8f\") " pod="default/busybox"
	
	
	==> storage-provisioner [b533e73acb848e0f0b9c498ca9fb30db73d108cbb8654654476681b8bb30d861] <==
	I1229 07:15:29.729330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:15:29.737763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:15:29.737808       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:15:29.739980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:29.745713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:15:29.745902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:15:29.746104       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-122332_c443d223-83bb-4a34-9cb5-322a8eadf9a4!
	I1229 07:15:29.746097       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"643709f2-3cd4-4ace-8f28-a3dfde29064a", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-122332_c443d223-83bb-4a34-9cb5-322a8eadf9a4 became leader
	W1229 07:15:29.748195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:29.751359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:15:29.846861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-122332_c443d223-83bb-4a34-9cb5-322a8eadf9a4!
	W1229 07:15:31.754335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:31.759677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:33.763040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:33.766835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:35.769739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:35.775273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:37.778249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:37.782717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:39.786154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:39.795618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:41.799425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:15:41.930820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122332 -n no-preload-122332
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-122332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)
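For context, the harness's last post-mortem query above (kubectl get po -A with --field-selector=status.phase!=Running) simply asks the cluster for any pod that is not in the Running phase. Below is a minimal client-go sketch of the same query; the context name no-preload-122332 is taken from the command above, while the kubeconfig discovery and overall program structure are illustrative only (the harness itself shells out to kubectl).

	// Minimal sketch (not the harness's code): list pods in any namespace
	// whose phase is not Running, against the no-preload-122332 context.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "no-preload-122332"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same field selector the post-mortem uses above.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}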

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.053746ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:16:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-739827 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-739827 describe deploy/metrics-server -n kube-system: exit status 1 (54.641547ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-739827 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
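The exit status 11 above comes from minikube's check-paused step: before applying the addon it runs "sudo runc list -f json" on the node, and on this crio profile that command fails with "open /run/runc: no such file or directory", so metrics-server is never deployed and the later kubectl describe predictably returns NotFound. A minimal sketch for re-running that check by hand is below; the profile name and the runc invocation are copied from the stderr block, while wrapping them in minikube ssh is an assumption made only for illustration.

	// Minimal sketch (not the harness's code): re-run the paused-state check
	// that failed above. The "sudo runc list -f json" command and the profile
	// name come from the stderr block; using minikube ssh as the transport is
	// an assumption for illustration.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-739827",
			"ssh", "--", "sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// On this crio node the command is expected to fail with
			// "open /run/runc: no such file or directory", matching the report.
			fmt.Println("runc list failed:", err)
		}
	}

If the same error reproduces, the failure is likely in the runtime paused-state check on the node rather than in the metrics-server manifests themselves.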
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-739827
helpers_test.go:244: (dbg) docker inspect embed-certs-739827:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510",
	        "Created": "2025-12-29T07:15:42.247731806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253895,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:15:42.291489139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/hosts",
	        "LogPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510-json.log",
	        "Name": "/embed-certs-739827",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-739827:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-739827",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510",
	                "LowerDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-739827",
	                "Source": "/var/lib/docker/volumes/embed-certs-739827/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-739827",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-739827",
	                "name.minikube.sigs.k8s.io": "embed-certs-739827",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35e7c84943b7103bcb9467d093559160e60b16341f639788d1966d1e77592db7",
	            "SandboxKey": "/var/run/docker/netns/35e7c84943b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-739827": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b087e00cc8440c1f4006081344d5fbc0a2e6dd2a74b7013ef26beec3a624ea25",
	                    "EndpointID": "89a830ac013dcc6ec328120c12aebaae4145bc3eb3f56a8b390119b6d7e377ee",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9e:d3:9a:24:20:01",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-739827",
	                        "5d317fcd0cf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
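The inspect dump above is what the helpers read to reach the node; for instance, the API server port 8443/tcp is published on 127.0.0.1:33071. Below is a minimal sketch that pulls that mapping out of docker inspect JSON; the struct models only the few fields used here and is illustrative, not the helpers' actual types.

	// Minimal sketch: extract the published host port for 8443/tcp from
	// "docker inspect <container>" JSON, as shown in the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspect models only the fields this sketch needs from docker inspect output.
	type inspect struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-739827").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
				// For the dump above this would print 127.0.0.1:33071.
				fmt.Printf("%s apiserver published at %s:%s\n", c.Name, b.HostIp, b.HostPort)
			}
		}
	}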
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-739827 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-001954          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ cert-options-001954 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-001954          │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ ssh     │ -p cert-options-001954 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-001954          │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ delete  │ -p cert-options-001954                                                                                                                                                                                                                        │ cert-options-001954          │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	│ stop    │ -p old-k8s-version-876718 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-876718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                                                                                     │ stopped-upgrade-518014       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                                                                                               │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ stop    │ -p no-preload-122332 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ delete  │ -p cert-expiration-452455                                                                                                                                                                                                                     │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p disable-driver-mounts-708770                                                                                                                                                                                                               │ disable-driver-mounts-708770 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p no-preload-122332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:16:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:16:02.443749  260780 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:02.443868  260780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:02.443876  260780 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:02.443880  260780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:02.444091  260780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:16:02.444548  260780 out.go:368] Setting JSON to false
	I1229 07:16:02.445612  260780 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3514,"bootTime":1766989048,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:16:02.445676  260780 start.go:143] virtualization: kvm guest
	I1229 07:16:02.447529  260780 out.go:179] * [no-preload-122332] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:16:02.448624  260780 notify.go:221] Checking for updates...
	I1229 07:16:02.448661  260780 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:16:02.450006  260780 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:16:02.451256  260780 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:02.452625  260780 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:16:02.453802  260780 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:16:02.454837  260780 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:16:02.456380  260780 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:02.456926  260780 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:16:02.480656  260780 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:16:02.480741  260780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:02.548574  260780 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:16:02.538465465 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:02.548680  260780 docker.go:319] overlay module found
	I1229 07:16:02.550359  260780 out.go:179] * Using the docker driver based on existing profile
	I1229 07:16:02.551629  260780 start.go:309] selected driver: docker
	I1229 07:16:02.551642  260780 start.go:928] validating driver "docker" against &{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:02.551718  260780 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:16:02.552298  260780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:02.631849  260780 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:16:02.604568832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:02.632289  260780 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:02.632333  260780 cni.go:84] Creating CNI manager for ""
	I1229 07:16:02.632405  260780 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:02.632453  260780 start.go:353] cluster config:
	{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
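Note: the cluster config dumped above is also persisted as JSON in the profile directory (the save is logged shortly afterwards at profile.go:143). A quick way to read the key settings back; the path comes from the log, while the JSON field names are assumed to mirror the struct fields shown above and jq is assumed to be available:

    # Illustrative sketch only, not part of the test run.
    jq '{version: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, nodeIP: .Nodes[0].IP}' \
      /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json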
	I1229 07:16:02.635276  260780 out.go:179] * Starting "no-preload-122332" primary control-plane node in "no-preload-122332" cluster
	I1229 07:16:02.636438  260780 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:16:02.637622  260780 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:15:59.265535  252990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:15:59.269699  252990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:15:59.269714  252990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:15:59.282597  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
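Note: the three lines above stage the CNI manifest on the node and apply it with minikube's bundled kubectl. An equivalent manual invocation, sketched only from paths that appear in the log, would be:

    # Apply the staged CNI manifest with the kubectl binary minikube installs on the node.
    sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml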
	I1229 07:15:59.515698  252990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:15:59.515868  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:59.515878  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-739827 minikube.k8s.io/updated_at=2025_12_29T07_15_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=embed-certs-739827 minikube.k8s.io/primary=true
	I1229 07:15:59.602995  252990 ops.go:34] apiserver oom_adj: -16
	I1229 07:15:59.603094  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:00.104202  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:00.603616  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:01.103828  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:01.604171  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:02.103429  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:02.603669  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
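Note: the block starting at 07:15:59.515 grants cluster-admin to the kube-system default service account and then polls for the "default" ServiceAccount roughly every 500ms until it exists; the result is later summarized as elevateKubeSystemPrivileges. A hand-run equivalent, sketched from the logged commands:

    KUBECTL=/var/lib/minikube/binaries/v1.35.0/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    # Grant cluster-admin to kube-system:default (same as the logged command).
    sudo "$KUBECTL" --kubeconfig="$KCFG" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # Wait until the controller manager has created the default ServiceAccount.
    until sudo "$KUBECTL" --kubeconfig="$KCFG" get sa default >/dev/null 2>&1; do
      sleep 0.5
    done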
	I1229 07:16:02.638877  260780 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:02.639014  260780 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:16:02.639377  260780 cache.go:107] acquiring lock: {Name:mk524ccc7d3121d195adc7d1863af70c1e10cb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.639463  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:16:02.639473  260780 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.257µs
	I1229 07:16:02.639482  260780 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:16:02.639503  260780 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:16:02.639896  260780 cache.go:107] acquiring lock: {Name:mk4e3cc5ac4b58daa39b77bf4639b595a7b5e1bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.639969  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:16:02.639978  260780 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 91.151µs
	I1229 07:16:02.639986  260780 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:16:02.640002  260780 cache.go:107] acquiring lock: {Name:mkceb8935c60ed9a529274ab83854aa71dbe9a7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640049  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:16:02.640056  260780 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 57.168µs
	I1229 07:16:02.640064  260780 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:16:02.640076  260780 cache.go:107] acquiring lock: {Name:mk52f4077c79f8806c7eb2c6a7253ed35dcf09ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640116  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:16:02.640123  260780 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 49.4µs
	I1229 07:16:02.640131  260780 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:16:02.640158  260780 cache.go:107] acquiring lock: {Name:mk6876db4017aa5ef89eab36b68c600dec62345c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640193  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:16:02.640199  260780 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 57.778µs
	I1229 07:16:02.640209  260780 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:16:02.640254  260780 cache.go:107] acquiring lock: {Name:mkca02c24b265c83f3ba73c3e4bff2d28831259c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640294  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:16:02.640301  260780 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 50.343µs
	I1229 07:16:02.640308  260780 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:16:02.640319  260780 cache.go:107] acquiring lock: {Name:mk2827ee73a1c5c546c3035bd69b730bda1ef682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640351  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:16:02.640358  260780 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 40.709µs
	I1229 07:16:02.640366  260780 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:16:02.640379  260780 cache.go:107] acquiring lock: {Name:mkeb7d05fa98b741eb24c41313df007ce9bbb93e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640417  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:16:02.640434  260780 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 56.634µs
	I1229 07:16:02.640449  260780 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:16:02.640457  260780 cache.go:87] Successfully saved all images to host disk.
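Note: the cache.go lines above only verify that each required image already exists as a tarball in the local cache, so nothing is downloaded here. The layout being checked can be listed directly; paths and file names come from the log, and the tag separator ':' is stored as '_':

    CACHE=/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64
    ls "$CACHE/registry.k8s.io"        # kube-apiserver_v1.35.0, kube-proxy_v1.35.0, etcd_3.6.6-0, pause_3.10.1, ...
    ls "$CACHE/gcr.io/k8s-minikube"    # storage-provisioner_v5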
	I1229 07:16:02.664020  260780 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:16:02.664052  260780 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:16:02.664073  260780 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:16:02.664108  260780 start.go:360] acquireMachinesLock for no-preload-122332: {Name:mka83f33e779c9aed23f5a0e4fef1298c9058532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.664173  260780 start.go:364] duration metric: took 43.904µs to acquireMachinesLock for "no-preload-122332"
	I1229 07:16:02.664192  260780 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:16:02.664198  260780 fix.go:54] fixHost starting: 
	I1229 07:16:02.664514  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:02.686258  260780 fix.go:112] recreateIfNeeded on no-preload-122332: state=Stopped err=<nil>
	W1229 07:16:02.686292  260780 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:16:03.103451  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:03.603687  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:04.103269  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:04.200816  252990 kubeadm.go:1114] duration metric: took 4.684995965s to wait for elevateKubeSystemPrivileges
	I1229 07:16:04.200855  252990 kubeadm.go:403] duration metric: took 14.678699553s to StartCluster
	I1229 07:16:04.200877  252990 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:04.200945  252990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:04.202494  252990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:04.202771  252990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:16:04.202786  252990 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:04.202763  252990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:04.202940  252990 addons.go:70] Setting default-storageclass=true in profile "embed-certs-739827"
	I1229 07:16:04.202966  252990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-739827"
	I1229 07:16:04.202891  252990 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-739827"
	I1229 07:16:04.203085  252990 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-739827"
	I1229 07:16:04.203096  252990 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:04.203108  252990 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:04.203462  252990 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:04.203557  252990 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:04.205492  252990 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:04.206702  252990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:04.230778  252990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:00.228848  257698 out.go:252]   - Booting up control plane ...
	I1229 07:16:00.228978  257698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:16:00.229080  257698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:16:00.229742  257698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:16:00.247141  257698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:16:00.247292  257698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:16:00.254581  257698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:16:00.255265  257698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:16:00.255330  257698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:16:00.355716  257698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:16:00.355826  257698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:16:00.857320  257698 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.713178ms
	I1229 07:16:00.861513  257698 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:16:00.861676  257698 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1229 07:16:00.861806  257698 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:16:00.861919  257698 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:16:01.866039  257698 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004375882s
	I1229 07:16:02.779963  257698 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.917503387s
	I1229 07:16:04.364141  257698 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502504327s
	I1229 07:16:04.385747  257698 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:16:04.396788  257698 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:16:04.408353  257698 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:16:04.408647  257698 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-798607 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:16:04.420045  257698 kubeadm.go:319] [bootstrap-token] Using token: ya1d0f.4qbol9q1tpj6po5z
	I1229 07:16:04.422613  257698 out.go:252]   - Configuring RBAC rules ...
	I1229 07:16:04.422864  257698 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:16:04.426012  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:16:04.432627  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:16:04.436262  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:16:04.439098  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:16:04.442529  257698 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:16:04.232109  252990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:04.232126  252990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:04.232174  252990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:04.233010  252990 addons.go:239] Setting addon default-storageclass=true in "embed-certs-739827"
	I1229 07:16:04.233060  252990 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:04.233546  252990 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:04.263907  252990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:04.263964  252990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:04.264043  252990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:04.263920  252990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:04.290192  252990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:04.310592  252990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:16:04.369759  252990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:04.383050  252990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:04.405077  252990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:04.494516  252990 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1229 07:16:04.495828  252990 node_ready.go:35] waiting up to 6m0s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:04.747837  252990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
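Note: the long sed pipeline at 07:16:04.310592 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.103.1 for this cluster) and adds the log plugin; the "host record injected" line confirms it took effect. A way to verify the result by hand, reusing the kubectl path from the log (the expected lines are reconstructed from the sed expressions, not copied from a live Corefile):

    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    #        hosts {
    #           192.168.103.1 host.minikube.internal
    #           fallthrough
    #        }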
	I1229 07:16:04.770887  257698 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:16:05.207942  257698 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:16:05.771444  257698 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:16:05.772314  257698 kubeadm.go:319] 
	I1229 07:16:05.772377  257698 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:16:05.772387  257698 kubeadm.go:319] 
	I1229 07:16:05.772479  257698 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:16:05.772489  257698 kubeadm.go:319] 
	I1229 07:16:05.772512  257698 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:16:05.772564  257698 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:16:05.772609  257698 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:16:05.772615  257698 kubeadm.go:319] 
	I1229 07:16:05.772699  257698 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:16:05.772712  257698 kubeadm.go:319] 
	I1229 07:16:05.772779  257698 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:16:05.772788  257698 kubeadm.go:319] 
	I1229 07:16:05.772862  257698 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:16:05.772996  257698 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:16:05.773099  257698 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:16:05.773111  257698 kubeadm.go:319] 
	I1229 07:16:05.773232  257698 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:16:05.773329  257698 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:16:05.773337  257698 kubeadm.go:319] 
	I1229 07:16:05.773436  257698 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token ya1d0f.4qbol9q1tpj6po5z \
	I1229 07:16:05.773560  257698 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:16:05.773588  257698 kubeadm.go:319] 	--control-plane 
	I1229 07:16:05.773593  257698 kubeadm.go:319] 
	I1229 07:16:05.773695  257698 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:16:05.773702  257698 kubeadm.go:319] 
	I1229 07:16:05.773794  257698 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token ya1d0f.4qbol9q1tpj6po5z \
	I1229 07:16:05.773941  257698 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
	I1229 07:16:05.776726  257698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:16:05.776844  257698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:16:05.776881  257698 cni.go:84] Creating CNI manager for ""
	I1229 07:16:05.776894  257698 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:05.778605  257698 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:16:02.688116  260780 out.go:252] * Restarting existing docker container for "no-preload-122332" ...
	I1229 07:16:02.688198  260780 cli_runner.go:164] Run: docker start no-preload-122332
	I1229 07:16:03.008410  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:03.027569  260780 kic.go:430] container "no-preload-122332" state is running.
	I1229 07:16:03.027901  260780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:16:03.047076  260780 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:16:03.047347  260780 machine.go:94] provisionDockerMachine start ...
	I1229 07:16:03.047434  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:03.067494  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:03.067781  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:03.067797  260780 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:16:03.068450  260780 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49898->127.0.0.1:33078: read: connection reset by peer
	I1229 07:16:06.224107  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:16:06.224161  260780 ubuntu.go:182] provisioning hostname "no-preload-122332"
	I1229 07:16:06.224240  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:06.243763  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:06.244071  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:06.244094  260780 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-122332 && echo "no-preload-122332" | sudo tee /etc/hostname
	I1229 07:16:06.395356  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:16:06.395431  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:06.414003  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:06.414305  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:06.414327  260780 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-122332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-122332/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-122332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:16:06.551715  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:16:06.551746  260780 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:16:06.551781  260780 ubuntu.go:190] setting up certificates
	I1229 07:16:06.551796  260780 provision.go:84] configureAuth start
	I1229 07:16:06.551861  260780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:16:06.569689  260780 provision.go:143] copyHostCerts
	I1229 07:16:06.569739  260780 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:16:06.569752  260780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:16:06.569828  260780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:16:06.569940  260780 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:16:06.569948  260780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:16:06.569976  260780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:16:06.570057  260780 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:16:06.570068  260780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:16:06.570106  260780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:16:06.570174  260780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.no-preload-122332 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-122332]
	I1229 07:16:06.818389  260780 provision.go:177] copyRemoteCerts
	I1229 07:16:06.818449  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:16:06.818482  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:06.837040  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:06.935125  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:16:06.952619  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:16:06.969746  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:16:06.986963  260780 provision.go:87] duration metric: took 435.143894ms to configureAuth
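Note: configureAuth above regenerates the machine's server certificate with the SANs listed at 07:16:06.570174 and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. One way to double-check the SANs on the host copy; the openssl invocation is an assumption about available tooling, while the path and expected names come from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem \
      | grep -A 1 'Subject Alternative Name'
    # expect: 127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-122332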
	I1229 07:16:06.987000  260780 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:16:06.987194  260780 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:06.987348  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.005799  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:07.006103  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:07.006133  260780 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:16:04.749168  252990 addons.go:530] duration metric: took 546.374384ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:16:04.999393  252990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-739827" context rescaled to 1 replicas
	W1229 07:16:06.499864  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	I1229 07:16:07.493300  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:16:07.493322  260780 machine.go:97] duration metric: took 4.445958531s to provisionDockerMachine
	I1229 07:16:07.493335  260780 start.go:293] postStartSetup for "no-preload-122332" (driver="docker")
	I1229 07:16:07.493349  260780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:16:07.493418  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:16:07.493453  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.514630  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.613194  260780 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:16:07.616995  260780 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:16:07.617024  260780 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:16:07.617034  260780 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:16:07.617074  260780 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:16:07.617156  260780 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:16:07.617256  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:16:07.625107  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:07.642849  260780 start.go:296] duration metric: took 149.498351ms for postStartSetup
	I1229 07:16:07.642936  260780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:16:07.642983  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.664279  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.762413  260780 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:16:07.766991  260780 fix.go:56] duration metric: took 5.102787642s for fixHost
	I1229 07:16:07.767017  260780 start.go:83] releasing machines lock for "no-preload-122332", held for 5.102835017s
	I1229 07:16:07.767090  260780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:16:07.786134  260780 ssh_runner.go:195] Run: cat /version.json
	I1229 07:16:07.786181  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.786213  260780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:16:07.786309  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.804023  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.804023  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.897336  260780 ssh_runner.go:195] Run: systemctl --version
	I1229 07:16:07.950849  260780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:16:07.987111  260780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:16:07.991621  260780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:16:07.991699  260780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:16:07.999673  260780 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:16:07.999694  260780 start.go:496] detecting cgroup driver to use...
	I1229 07:16:07.999725  260780 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:16:07.999775  260780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:16:08.013677  260780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:16:08.025257  260780 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:16:08.025308  260780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:16:08.040318  260780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:16:08.052098  260780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:16:08.130486  260780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:16:08.216857  260780 docker.go:234] disabling docker service ...
	I1229 07:16:08.216913  260780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:16:08.231380  260780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:16:08.243659  260780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:16:08.325705  260780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:16:08.406352  260780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:16:08.419666  260780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:16:08.437006  260780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:16:08.437068  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.445952  260780 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:16:08.446004  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.454594  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.462937  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.471778  260780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:16:08.479676  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.488706  260780 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.497036  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.506200  260780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:16:08.514213  260780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:16:08.521389  260780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:08.602735  260780 ssh_runner.go:195] Run: sudo systemctl restart crio
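Note: the sequence from 07:16:08.419666 up to the restart above configures CRI-O entirely through a crictl endpoint file and sed edits of /etc/crio/crio.conf.d/02-crio.conf: the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A quick post-restart check, with the expected values taken from those sed expressions (illustrative only):

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    sudo systemctl is-active crio    # should report "active" after the restart above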
	I1229 07:16:08.737390  260780 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:16:08.737446  260780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:16:08.741340  260780 start.go:574] Will wait 60s for crictl version
	I1229 07:16:08.741384  260780 ssh_runner.go:195] Run: which crictl
	I1229 07:16:08.745157  260780 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:16:08.768460  260780 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:16:08.768544  260780 ssh_runner.go:195] Run: crio --version
	I1229 07:16:08.795631  260780 ssh_runner.go:195] Run: crio --version
	I1229 07:16:08.824111  260780 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:16:05.414177  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:05.414637  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:05.414688  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:05.414733  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:05.444555  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:05.444574  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:05.444578  225445 cri.go:96] found id: ""
	I1229 07:16:05.444585  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:05.444640  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.448489  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.452440  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:05.452513  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:05.483010  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:05.483033  225445 cri.go:96] found id: ""
	I1229 07:16:05.483042  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:05.483109  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.487159  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:05.487242  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:05.516757  225445 cri.go:96] found id: ""
	I1229 07:16:05.516783  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.516791  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:05.516797  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:05.516846  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:05.545493  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:16:05.545512  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:05.545516  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:05.545519  225445 cri.go:96] found id: ""
	I1229 07:16:05.545526  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:05.545570  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.549488  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.552977  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.556385  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:05.556435  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:05.583362  225445 cri.go:96] found id: ""
	I1229 07:16:05.583383  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.583391  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:05.583396  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:05.583452  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:16:05.613371  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:05.613391  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:05.613395  225445 cri.go:96] found id: ""
	I1229 07:16:05.613403  225445 logs.go:282] 2 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:16:05.613446  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.617582  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.621252  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:16:05.621310  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:16:05.649489  225445 cri.go:96] found id: ""
	I1229 07:16:05.649514  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.649526  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:05.649533  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:05.649588  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:05.676676  225445 cri.go:96] found id: ""
	I1229 07:16:05.676699  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.676706  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:05.676714  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:05.676724  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:05.703589  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:05.703628  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:05.779177  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:05.779215  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:05.879985  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:05.880020  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:05.953121  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:05.953156  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:05.983806  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:05.983842  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:06.015711  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:06.015744  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:06.052557  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:06.052593  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:06.067154  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:06.067181  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:06.149329  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
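	The "connection refused" on localhost:8443 above simply means nothing was listening on the apiserver port at the moment the log collector ran kubectl describe nodes. A quick manual spot-check from the node would look like the sketch below; the profile name is a placeholder (it is not shown in this part of the run) and curl is assumed to be present in the node image:
	    minikube -p <profile> ssh -- sudo crictl ps -a --name kube-apiserver
	    minikube -p <profile> ssh -- curl -sk https://localhost:8443/version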
	I1229 07:16:06.149356  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:06.149373  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:06.185562  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:06.185590  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:06.217252  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:06.217278  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:06.253854  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:16:06.253886  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:16:06.280521  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:06.280551  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:06.280563  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:06.280616  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:16:06.280628  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:06.280633  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:06.280637  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
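	The kube-scheduler problem flagged above is a plain port collision: another scheduler instance still owns 127.0.0.1:10259 when the restarted one tries to bind it. A minimal diagnostic sketch (the profile name is a placeholder, not taken from this run):
	    minikube -p <profile> ssh "sudo ss -ltnp | grep 10259"
	    minikube -p <profile> ssh -- sudo crictl ps -a --name kube-scheduler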
	I1229 07:16:05.779739  257698 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:16:05.784323  257698 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:16:05.784340  257698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:16:05.798520  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:16:06.021729  257698 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:16:06.021853  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:06.021877  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-798607 minikube.k8s.io/updated_at=2025_12_29T07_16_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=default-k8s-diff-port-798607 minikube.k8s.io/primary=true
	I1229 07:16:06.032297  257698 ops.go:34] apiserver oom_adj: -16
	I1229 07:16:06.123180  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:06.624136  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:07.123583  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:07.623394  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:08.123788  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:08.623977  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:09.123172  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:09.623339  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:08.825511  260780 cli_runner.go:164] Run: docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:16:08.843046  260780 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:16:08.847362  260780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
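	The one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the gateway mapping for this network, and copy the temp file back into place with sudo. Spelled out step by step with the same values as this run (sketch only):
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.94.1	host.minikube.internal"; } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts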
	I1229 07:16:08.857654  260780 kubeadm.go:884] updating cluster {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:16:08.857747  260780 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:08.857778  260780 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:08.892104  260780 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:08.892123  260780 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:16:08.892130  260780 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:16:08.892242  260780 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-122332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
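	In the rendered kubelet unit above, the empty ExecStart= line is the standard systemd way of clearing the inherited ExecStart before supplying minikube's own command line; the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To inspect the effective unit on the node (sketch, using the profile from this run):
	    minikube -p no-preload-122332 ssh -- systemctl cat kubelet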
	I1229 07:16:08.892308  260780 ssh_runner.go:195] Run: crio config
	I1229 07:16:08.938576  260780 cni.go:84] Creating CNI manager for ""
	I1229 07:16:08.938594  260780 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:08.938607  260780 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:16:08.938635  260780 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-122332 NodeName:no-preload-122332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:16:08.938777  260780 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-122332"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:16:08.938840  260780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:16:08.946885  260780 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:16:08.946947  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:16:08.954902  260780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:16:08.967474  260780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:16:08.980397  260780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
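	The kubeadm config rendered above is what the scp on the previous line writes to /var/tmp/minikube/kubeadm.yaml.new; later in the run it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. The same comparison can be reproduced by hand (sketch, using the profile from this run):
	    minikube -p no-preload-122332 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    minikube -p no-preload-122332 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new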
	I1229 07:16:08.992871  260780 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:16:08.996432  260780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:09.006542  260780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:09.087746  260780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:09.112470  260780 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332 for IP: 192.168.94.2
	I1229 07:16:09.112493  260780 certs.go:195] generating shared ca certs ...
	I1229 07:16:09.112511  260780 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.112678  260780 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:16:09.112731  260780 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:16:09.112746  260780 certs.go:257] generating profile certs ...
	I1229 07:16:09.112845  260780 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key
	I1229 07:16:09.112928  260780 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595
	I1229 07:16:09.112984  260780 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key
	I1229 07:16:09.113144  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:16:09.113190  260780 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:16:09.113204  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:16:09.113256  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:16:09.113304  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:16:09.113336  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:16:09.113392  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:09.114188  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:16:09.135589  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:16:09.155082  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:16:09.174510  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:16:09.200211  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:16:09.220502  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:16:09.237694  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:16:09.254571  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:16:09.270728  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:16:09.288431  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:16:09.305730  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:16:09.323684  260780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:16:09.336197  260780 ssh_runner.go:195] Run: openssl version
	I1229 07:16:09.342075  260780 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.348961  260780 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:16:09.356166  260780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.359621  260780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.359673  260780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.395522  260780 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:16:09.403023  260780 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.410206  260780 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:16:09.417565  260780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.421113  260780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.421166  260780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.465713  260780 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:16:09.473479  260780 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.480734  260780 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:16:09.488057  260780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.491795  260780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.491842  260780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.529280  260780 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:16:09.538665  260780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:16:09.544676  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:16:09.584472  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:16:09.622357  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:16:09.670719  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:16:09.720180  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:16:09.779489  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:16:09.819391  260780 kubeadm.go:401] StartCluster: {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:09.819485  260780 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:16:09.819533  260780 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:16:09.849341  260780 cri.go:96] found id: "182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd"
	I1229 07:16:09.849360  260780 cri.go:96] found id: "3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc"
	I1229 07:16:09.849364  260780 cri.go:96] found id: "482322719dad640690982288c2258e90836d194891b2179cab964e1340265902"
	I1229 07:16:09.849371  260780 cri.go:96] found id: "013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206"
	I1229 07:16:09.849374  260780 cri.go:96] found id: ""
	I1229 07:16:09.849413  260780 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:16:09.861190  260780 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:16:09Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:16:09.861272  260780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:16:09.869803  260780 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:16:09.869825  260780 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:16:09.869882  260780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:16:09.878191  260780 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:16:09.878941  260780 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-122332" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:09.879427  260780 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-122332" cluster setting kubeconfig missing "no-preload-122332" context setting]
	I1229 07:16:09.880162  260780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.881753  260780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:16:09.890244  260780 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1229 07:16:09.890272  260780 kubeadm.go:602] duration metric: took 20.440452ms to restartPrimaryControlPlane
	I1229 07:16:09.890282  260780 kubeadm.go:403] duration metric: took 70.898981ms to StartCluster
	I1229 07:16:09.890298  260780 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.890361  260780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:09.891559  260780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.891789  260780 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:09.891886  260780 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:09.891989  260780 addons.go:70] Setting storage-provisioner=true in profile "no-preload-122332"
	I1229 07:16:09.892005  260780 addons.go:70] Setting dashboard=true in profile "no-preload-122332"
	I1229 07:16:09.892011  260780 addons.go:239] Setting addon storage-provisioner=true in "no-preload-122332"
	I1229 07:16:09.892018  260780 addons.go:239] Setting addon dashboard=true in "no-preload-122332"
	W1229 07:16:09.892020  260780 addons.go:248] addon storage-provisioner should already be in state true
	W1229 07:16:09.892026  260780 addons.go:248] addon dashboard should already be in state true
	I1229 07:16:09.892043  260780 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:16:09.892047  260780 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:16:09.892042  260780 addons.go:70] Setting default-storageclass=true in profile "no-preload-122332"
	I1229 07:16:09.892067  260780 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-122332"
	I1229 07:16:09.892403  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.892516  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.891991  260780 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:09.892615  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.894792  260780 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:09.896115  260780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:09.918174  260780 addons.go:239] Setting addon default-storageclass=true in "no-preload-122332"
	W1229 07:16:09.918202  260780 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:16:09.918241  260780 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:16:09.918679  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.921808  260780 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:16:09.922519  260780 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:09.924237  260780 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:16:10.123978  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:10.623420  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:10.704283  257698 kubeadm.go:1114] duration metric: took 4.682487614s to wait for elevateKubeSystemPrivileges
	I1229 07:16:10.704324  257698 kubeadm.go:403] duration metric: took 11.943493884s to StartCluster
	I1229 07:16:10.704346  257698 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:10.704418  257698 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:10.707046  257698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:10.707375  257698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:16:10.707386  257698 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:10.707463  257698 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:10.707555  257698 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-798607"
	I1229 07:16:10.707570  257698 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:10.707574  257698 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-798607"
	I1229 07:16:10.707647  257698 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:16:10.707572  257698 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-798607"
	I1229 07:16:10.707699  257698 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-798607"
	I1229 07:16:10.708066  257698 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:10.708302  257698 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:10.709703  257698 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:10.710951  257698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:10.735779  257698 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:10.736976  257698 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:10.736995  257698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:10.737045  257698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:10.737451  257698 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-798607"
	I1229 07:16:10.737533  257698 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:16:10.737917  257698 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:10.767368  257698 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:10.767411  257698 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:10.767465  257698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:10.769643  257698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:10.807313  257698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:10.843092  257698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
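	The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block (mapping 192.168.85.1 to host.minikube.internal, with fallthrough) ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of "errors", then replaces the ConfigMap in one pass; the "host record injected" line a few lines below confirms it applied. To view the resulting Corefile (sketch, assuming the local kubectl context points at default-k8s-diff-port-798607):
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'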
	I1229 07:16:10.865866  257698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:10.903166  257698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:10.938458  257698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:11.033447  257698 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1229 07:16:11.037700  257698 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:16:11.242066  257698 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:16:09.924367  260780 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:09.924385  260780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:09.924453  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:09.927664  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:16:09.927690  260780 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:16:09.927748  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:09.947784  260780 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:09.947810  260780 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:09.947872  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:09.962102  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:09.964066  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:09.976057  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:10.047398  260780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:10.062250  260780 node_ready.go:35] waiting up to 6m0s for node "no-preload-122332" to be "Ready" ...
	I1229 07:16:10.079533  260780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:10.080154  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:16:10.080176  260780 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:16:10.088176  260780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:10.094590  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:16:10.094611  260780 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:16:10.109751  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:16:10.109777  260780 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:16:10.123436  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:16:10.123461  260780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:16:10.138346  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:16:10.138372  260780 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:16:10.152445  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:16:10.152468  260780 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:16:10.165143  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:16:10.165160  260780 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:16:10.178658  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:16:10.178684  260780 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:16:10.193528  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:10.193557  260780 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:16:10.205978  260780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:11.530362  260780 node_ready.go:49] node "no-preload-122332" is "Ready"
	I1229 07:16:11.530405  260780 node_ready.go:38] duration metric: took 1.468113831s for node "no-preload-122332" to be "Ready" ...
	I1229 07:16:11.530423  260780 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:11.530481  260780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:12.096131  260780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.016563566s)
	I1229 07:16:12.096237  260780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.008009241s)
	I1229 07:16:12.096429  260780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.890415578s)
	I1229 07:16:12.096486  260780 api_server.go:72] duration metric: took 2.204662826s to wait for apiserver process to appear ...
	I1229 07:16:12.096503  260780 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:12.096522  260780 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:16:12.097856  260780 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-122332 addons enable metrics-server
	
	I1229 07:16:12.102178  260780 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:12.102206  260780 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:16:12.104099  260780 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1229 07:16:12.105156  260780 addons.go:530] duration metric: took 2.213279034s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1229 07:16:08.999265  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	W1229 07:16:11.000314  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	I1229 07:16:11.243139  257698 addons.go:530] duration metric: took 535.674583ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:16:11.539081  257698 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-798607" context rescaled to 1 replicas
	W1229 07:16:13.040968  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	I1229 07:16:12.596646  260780 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:16:12.607232  260780 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:12.607265  260780 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:16:13.096738  260780 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:16:13.101713  260780 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1229 07:16:13.103040  260780 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:13.103069  260780 api_server.go:131] duration metric: took 1.006559392s to wait for apiserver health ...
	I1229 07:16:13.103077  260780 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:13.107159  260780 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:13.107205  260780 system_pods.go:61] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:13.107236  260780 system_pods.go:61] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:13.107251  260780 system_pods.go:61] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:16:13.107265  260780 system_pods.go:61] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:13.107278  260780 system_pods.go:61] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:13.107290  260780 system_pods.go:61] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:16:13.107311  260780 system_pods.go:61] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:13.107324  260780 system_pods.go:61] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:13.107332  260780 system_pods.go:74] duration metric: took 4.248721ms to wait for pod list to return data ...
	I1229 07:16:13.107643  260780 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:13.111795  260780 default_sa.go:45] found service account: "default"
	I1229 07:16:13.111819  260780 default_sa.go:55] duration metric: took 4.157923ms for default service account to be created ...
	I1229 07:16:13.111830  260780 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:13.114940  260780 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:13.114974  260780 system_pods.go:89] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:13.114985  260780 system_pods.go:89] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:13.115001  260780 system_pods.go:89] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:16:13.115019  260780 system_pods.go:89] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:13.115031  260780 system_pods.go:89] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:13.115040  260780 system_pods.go:89] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:16:13.115049  260780 system_pods.go:89] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:13.115059  260780 system_pods.go:89] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:13.115068  260780 system_pods.go:126] duration metric: took 3.231622ms to wait for k8s-apps to be running ...
	I1229 07:16:13.115080  260780 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:13.115134  260780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:13.129420  260780 system_svc.go:56] duration metric: took 14.330066ms WaitForService to wait for kubelet
	I1229 07:16:13.129450  260780 kubeadm.go:587] duration metric: took 3.23762937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:13.129471  260780 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:13.132922  260780 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:13.132958  260780 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:13.132986  260780 node_conditions.go:105] duration metric: took 3.508619ms to run NodePressure ...
	I1229 07:16:13.133002  260780 start.go:242] waiting for startup goroutines ...
	I1229 07:16:13.133013  260780 start.go:247] waiting for cluster config update ...
	I1229 07:16:13.133027  260780 start.go:256] writing updated cluster config ...
	I1229 07:16:13.133395  260780 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:13.137637  260780 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:13.141632  260780 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6rcr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:16:15.146677  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:17.147111  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:13.499098  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	W1229 07:16:15.999532  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	I1229 07:16:16.502072  252990 node_ready.go:49] node "embed-certs-739827" is "Ready"
	I1229 07:16:16.502105  252990 node_ready.go:38] duration metric: took 12.006247326s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:16.502128  252990 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:16.502196  252990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:16.522089  252990 api_server.go:72] duration metric: took 12.319199575s to wait for apiserver process to appear ...
	I1229 07:16:16.522121  252990 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:16.522169  252990 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:16.529618  252990 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:16:16.530753  252990 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:16.530774  252990 api_server.go:131] duration metric: took 8.646632ms to wait for apiserver health ...
	I1229 07:16:16.530782  252990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:16.534314  252990 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:16.534355  252990 system_pods.go:61] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:16.534363  252990 system_pods.go:61] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:16.534375  252990 system_pods.go:61] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:16.534381  252990 system_pods.go:61] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:16.534393  252990 system_pods.go:61] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:16.534408  252990 system_pods.go:61] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:16.534414  252990 system_pods.go:61] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:16.534421  252990 system_pods.go:61] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:16.534428  252990 system_pods.go:74] duration metric: took 3.64069ms to wait for pod list to return data ...
	I1229 07:16:16.534437  252990 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:16.536899  252990 default_sa.go:45] found service account: "default"
	I1229 07:16:16.536918  252990 default_sa.go:55] duration metric: took 2.474071ms for default service account to be created ...
	I1229 07:16:16.536928  252990 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:16.540806  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:16.540861  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:16.540873  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:16.540881  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:16.540890  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:16.540899  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:16.540905  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:16.540920  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:16.540934  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:16.540969  252990 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:16:16.809060  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:16.809101  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:16.809110  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:16.809120  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:16.809127  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:16.809137  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:16.809146  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:16.809154  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:16.809164  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:17.193827  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:17.193866  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:17.193876  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:17.193884  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:17.193889  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:17.193919  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:17.193931  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:17.193939  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:17.193946  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:17.538060  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:17.538103  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:17.538112  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:17.538119  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:17.538125  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:17.538133  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:17.538139  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:17.538145  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:17.538153  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:16.286301  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:16.286785  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:16.286845  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:16.286914  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:16.322121  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:16.322148  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:16.322153  225445 cri.go:96] found id: ""
	I1229 07:16:16.322163  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:16.322251  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.327395  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.332794  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:16.332858  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:16.369417  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:16.369445  225445 cri.go:96] found id: ""
	I1229 07:16:16.369457  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:16.369520  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.374332  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:16.374395  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:16.408673  225445 cri.go:96] found id: ""
	I1229 07:16:16.408703  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.408715  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:16.408722  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:16.408777  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:16.440753  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:16:16.440777  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:16.440782  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:16.440786  225445 cri.go:96] found id: ""
	I1229 07:16:16.440794  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:16.440857  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.445989  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.450432  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.454706  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:16.454763  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:16.489203  225445 cri.go:96] found id: ""
	I1229 07:16:16.489239  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.489250  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:16.489257  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:16.489318  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:16:16.528556  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:16.528577  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:16.528583  225445 cri.go:96] found id: ""
	I1229 07:16:16.528592  225445 logs.go:282] 2 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:16:16.528645  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.534131  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.538795  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:16:16.538858  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:16:16.573286  225445 cri.go:96] found id: ""
	I1229 07:16:16.573315  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.573325  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:16.573333  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:16.573394  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:16.610365  225445 cri.go:96] found id: ""
	I1229 07:16:16.610393  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.610406  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:16.610419  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:16:16.610437  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:16:16.642063  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:16.642090  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:16.642104  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:16.738108  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:16.738204  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:16.807730  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:16.807758  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:16.807774  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:16.848558  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:16.848589  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:16.940984  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:16.941019  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:16.975456  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:16.975490  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:17.010586  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:17.010619  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:17.046987  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:17.047022  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:17.088590  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:17.088624  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:17.218079  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:17.218119  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:17.236043  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:17.236078  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:17.275451  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:17.275485  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:17.318709  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:17.318742  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:17.318809  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:16:17.318824  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:17.318830  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:17.318836  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:15.041173  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	W1229 07:16:17.041830  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	W1229 07:16:19.042177  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	I1229 07:16:18.120694  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:18.120733  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Running
	I1229 07:16:18.120744  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:18.120750  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:18.120757  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:18.120769  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:18.120784  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:18.120792  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:18.120799  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Running
	I1229 07:16:18.120809  252990 system_pods.go:126] duration metric: took 1.583874938s to wait for k8s-apps to be running ...
	I1229 07:16:18.120820  252990 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:18.120875  252990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:18.138516  252990 system_svc.go:56] duration metric: took 17.687868ms WaitForService to wait for kubelet
	I1229 07:16:18.138549  252990 kubeadm.go:587] duration metric: took 13.935664043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:18.138571  252990 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:18.141761  252990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:18.141791  252990 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:18.141814  252990 node_conditions.go:105] duration metric: took 3.23376ms to run NodePressure ...
	I1229 07:16:18.141829  252990 start.go:242] waiting for startup goroutines ...
	I1229 07:16:18.141843  252990 start.go:247] waiting for cluster config update ...
	I1229 07:16:18.141856  252990 start.go:256] writing updated cluster config ...
	I1229 07:16:18.142150  252990 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:18.147256  252990 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:18.151738  252990 pod_ready.go:83] waiting for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.156521  252990 pod_ready.go:94] pod "coredns-7d764666f9-55529" is "Ready"
	I1229 07:16:18.156544  252990 pod_ready.go:86] duration metric: took 4.780643ms for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.158730  252990 pod_ready.go:83] waiting for pod "etcd-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.163175  252990 pod_ready.go:94] pod "etcd-embed-certs-739827" is "Ready"
	I1229 07:16:18.163204  252990 pod_ready.go:86] duration metric: took 4.452227ms for pod "etcd-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.165573  252990 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.169958  252990 pod_ready.go:94] pod "kube-apiserver-embed-certs-739827" is "Ready"
	I1229 07:16:18.169980  252990 pod_ready.go:86] duration metric: took 4.385251ms for pod "kube-apiserver-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.172262  252990 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.952863  252990 pod_ready.go:94] pod "kube-controller-manager-embed-certs-739827" is "Ready"
	I1229 07:16:18.952902  252990 pod_ready.go:86] duration metric: took 780.618636ms for pod "kube-controller-manager-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:19.153103  252990 pod_ready.go:83] waiting for pod "kube-proxy-hdmp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:19.552450  252990 pod_ready.go:94] pod "kube-proxy-hdmp6" is "Ready"
	I1229 07:16:19.552474  252990 pod_ready.go:86] duration metric: took 399.346089ms for pod "kube-proxy-hdmp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:19.752024  252990 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:20.151703  252990 pod_ready.go:94] pod "kube-scheduler-embed-certs-739827" is "Ready"
	I1229 07:16:20.151737  252990 pod_ready.go:86] duration metric: took 399.681992ms for pod "kube-scheduler-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:20.151753  252990 pod_ready.go:40] duration metric: took 2.004461757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:20.197550  252990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:16:20.228678  252990 out.go:179] * Done! kubectl is now configured to use "embed-certs-739827" cluster and "default" namespace by default
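Editor's note: the embed-certs-739827 startup above waits for the apiserver /healthz endpoint to flip from 500 (poststarthook/rbac/bootstrap-roles still failing) to 200 before it inspects kube-system pods. Below is a minimal, hypothetical Go sketch of that kind of poll loop, not minikube's own api_server.go implementation; the URL is copied from the log, and TLS verification is skipped purely for illustration against a throwaway local cluster.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above (embed-certs-739827 apiserver).
	url := "https://192.168.103.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification for the illustration only; a real client
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}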
	W1229 07:16:19.147566  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:21.147802  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:21.541192  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	W1229 07:16:24.041265  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	I1229 07:16:24.541697  257698 node_ready.go:49] node "default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:24.541740  257698 node_ready.go:38] duration metric: took 13.504008187s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:16:24.541757  257698 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:24.541817  257698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:24.555349  257698 api_server.go:72] duration metric: took 13.847927079s to wait for apiserver process to appear ...
	I1229 07:16:24.555380  257698 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:24.555397  257698 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1229 07:16:24.560461  257698 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1229 07:16:24.561323  257698 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:24.561348  257698 api_server.go:131] duration metric: took 5.961012ms to wait for apiserver health ...
	I1229 07:16:24.561358  257698 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:24.564806  257698 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:24.564850  257698 system_pods.go:61] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:24.564865  257698 system_pods.go:61] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:24.564876  257698 system_pods.go:61] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:24.564884  257698 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:24.564891  257698 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:24.564900  257698 system_pods.go:61] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:24.564906  257698 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:24.564923  257698 system_pods.go:61] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:24.564935  257698 system_pods.go:74] duration metric: took 3.569715ms to wait for pod list to return data ...
	I1229 07:16:24.564947  257698 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:24.568054  257698 default_sa.go:45] found service account: "default"
	I1229 07:16:24.568071  257698 default_sa.go:55] duration metric: took 3.116968ms for default service account to be created ...
	I1229 07:16:24.568078  257698 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:24.571042  257698 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:24.571074  257698 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:24.571082  257698 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:24.571091  257698 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:24.571101  257698 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:24.571111  257698 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:24.571117  257698 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:24.571123  257698 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:24.571132  257698 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:24.571171  257698 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1229 07:16:24.766859  257698 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:24.766915  257698 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:24.766924  257698 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:24.766933  257698 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:24.766941  257698 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:24.766951  257698 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:24.766957  257698 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:24.766962  257698 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:24.766973  257698 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:25.156403  257698 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:25.156439  257698 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Running
	I1229 07:16:25.156448  257698 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:25.156455  257698 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:25.156460  257698 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running
	I1229 07:16:25.156466  257698 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:25.156472  257698 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:25.156478  257698 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:25.156483  257698 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Running
	I1229 07:16:25.156493  257698 system_pods.go:126] duration metric: took 588.408771ms to wait for k8s-apps to be running ...
	I1229 07:16:25.156506  257698 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:25.156558  257698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:25.170169  257698 system_svc.go:56] duration metric: took 13.655258ms WaitForService to wait for kubelet
	I1229 07:16:25.170197  257698 kubeadm.go:587] duration metric: took 14.462778971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:25.170213  257698 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:25.174799  257698 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:25.174825  257698 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:25.174841  257698 node_conditions.go:105] duration metric: took 4.624004ms to run NodePressure ...
	I1229 07:16:25.174852  257698 start.go:242] waiting for startup goroutines ...
	I1229 07:16:25.174858  257698 start.go:247] waiting for cluster config update ...
	I1229 07:16:25.174868  257698 start.go:256] writing updated cluster config ...
	I1229 07:16:25.175140  257698 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:25.178840  257698 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:25.256252  257698 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jwmww" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.260813  257698 pod_ready.go:94] pod "coredns-7d764666f9-jwmww" is "Ready"
	I1229 07:16:25.260841  257698 pod_ready.go:86] duration metric: took 4.558151ms for pod "coredns-7d764666f9-jwmww" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.262830  257698 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.266727  257698 pod_ready.go:94] pod "etcd-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:25.266752  257698 pod_ready.go:86] duration metric: took 3.893811ms for pod "etcd-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.268446  257698 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.271916  257698 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:25.271945  257698 pod_ready.go:86] duration metric: took 3.478604ms for pod "kube-apiserver-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.273743  257698 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.583106  257698 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:25.583133  257698 pod_ready.go:86] duration metric: took 309.370002ms for pod "kube-controller-manager-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.783207  257698 pod_ready.go:83] waiting for pod "kube-proxy-4mnzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.183509  257698 pod_ready.go:94] pod "kube-proxy-4mnzc" is "Ready"
	I1229 07:16:26.183537  257698 pod_ready.go:86] duration metric: took 400.277554ms for pod "kube-proxy-4mnzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.383639  257698 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.782731  257698 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:26.782755  257698 pod_ready.go:86] duration metric: took 399.089831ms for pod "kube-scheduler-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.782766  257698 pod_ready.go:40] duration metric: took 1.603900332s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:26.826665  257698 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:16:26.828685  257698 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-798607" cluster and "default" namespace by default
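Editor's note: after "Done!", both profiles still run the "extra waiting" phase (pod_ready.go) that polls the labelled kube-system pods until each reports the Ready condition. The sketch below is a hypothetical client-go equivalent of that check for a single label selector; it is not the harness code, and the kubeconfig location and selector are assumptions taken from the log (minikube points the default kubectl context at the profile, as the "Done!" line states).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every kube-system pod matching the selector has Ready=True.
func allReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	// Assumed kubeconfig path (~/.kube/config), where minikube writes the profile's context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ok, err := allReady(ctx, cs, "k8s-app=kube-dns")
		if err == nil && ok {
			fmt.Println("all kube-dns pods Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}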
	W1229 07:16:23.647149  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:26.147485  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	I1229 07:16:27.320716  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:27.321052  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:27.321103  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:27.321144  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:27.350441  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:27.350465  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:27.350471  225445 cri.go:96] found id: ""
	I1229 07:16:27.350480  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:27.350537  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.354565  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.358053  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:27.358107  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:27.383946  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:27.383967  225445 cri.go:96] found id: ""
	I1229 07:16:27.383977  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:27.384027  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.387929  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:27.387982  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:27.415193  225445 cri.go:96] found id: ""
	I1229 07:16:27.415214  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.415236  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:27.415244  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:27.415300  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:27.442113  225445 cri.go:96] found id: "14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	I1229 07:16:27.442133  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:27.442152  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:27.442156  225445 cri.go:96] found id: ""
	I1229 07:16:27.442163  225445 logs.go:282] 3 containers: [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:27.442245  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.446341  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.449897  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.453338  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:27.453396  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:27.480706  225445 cri.go:96] found id: ""
	I1229 07:16:27.480735  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.480746  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:27.480754  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:27.480811  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:16:27.508753  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:27.508778  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:27.508783  225445 cri.go:96] found id: ""
	I1229 07:16:27.508789  225445 logs.go:282] 2 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:16:27.508833  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.513001  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.517076  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:16:27.517136  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:16:27.543841  225445 cri.go:96] found id: ""
	I1229 07:16:27.543869  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.543881  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:27.543911  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:27.543965  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:27.571613  225445 cri.go:96] found id: ""
	I1229 07:16:27.571640  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.571650  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:27.571662  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:27.571679  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:27.598341  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:27.598373  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:27.625760  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:27.625787  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:27.695839  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:27.695882  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:27.752835  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:27.752855  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:27.752867  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:27.784333  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:27.784371  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:27.814527  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:27.814559  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:27.841004  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:27.841034  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:27.871420  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:27.871446  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:27.961848  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:27.961880  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:27.975778  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:27.975810  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:28.009006  225445 logs.go:123] Gathering logs for kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] ...
	I1229 07:16:28.009032  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	W1229 07:16:28.034661  225445 logs.go:138] Found kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] problem: E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:28.034684  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:28.034695  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:28.103089  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:28.103115  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:28.103169  225445 out.go:285] X Problems detected in kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90]:
	W1229 07:16:28.103181  225445 out.go:285]   E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:28.103188  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:28.103194  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	
	
	==> CRI-O <==
	Dec 29 07:16:16 embed-certs-739827 crio[776]: time="2025-12-29T07:16:16.713800896Z" level=info msg="Starting container: 6a3ee81d1c0da0ebe553112c7fafa74b7dfbc3a76e2995ef19de8224bd7a1292" id=a119e741-b853-4dcf-bdb4-5486cc4db215 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:16 embed-certs-739827 crio[776]: time="2025-12-29T07:16:16.716653561Z" level=info msg="Started container" PID=1896 containerID=6a3ee81d1c0da0ebe553112c7fafa74b7dfbc3a76e2995ef19de8224bd7a1292 description=kube-system/coredns-7d764666f9-55529/coredns id=a119e741-b853-4dcf-bdb4-5486cc4db215 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b7bb31a67ff8210b59064e5e9fe4544f1de196b19c100c5df6537c77627f78f
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.77388779Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7cad195f-5389-4b66-a0ff-adf24b52ca8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.773990425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.778831137Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bc328eb6e95a52f6090f09ad11c9c6c03b9b633b6e586be61146a12ea7b93572 UID:fec130f6-04f7-4f99-8723-932ebe4f8b00 NetNS:/var/run/netns/d52944bb-37a4-4414-9e7e-103f31c90c5b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000590ed8}] Aliases:map[]}"
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.778859501Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.794949936Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bc328eb6e95a52f6090f09ad11c9c6c03b9b633b6e586be61146a12ea7b93572 UID:fec130f6-04f7-4f99-8723-932ebe4f8b00 NetNS:/var/run/netns/d52944bb-37a4-4414-9e7e-103f31c90c5b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000590ed8}] Aliases:map[]}"
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.795085178Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.795880902Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.796801241Z" level=info msg="Ran pod sandbox bc328eb6e95a52f6090f09ad11c9c6c03b9b633b6e586be61146a12ea7b93572 with infra container: default/busybox/POD" id=7cad195f-5389-4b66-a0ff-adf24b52ca8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.79808406Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=70ea546a-b7d2-4f01-99a5-1756fbd28394 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.798205412Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=70ea546a-b7d2-4f01-99a5-1756fbd28394 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.798320995Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=70ea546a-b7d2-4f01-99a5-1756fbd28394 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.799040486Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=03c484e1-d1b5-4837-b98a-fd7e8b723872 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:16:20 embed-certs-739827 crio[776]: time="2025-12-29T07:16:20.799419331Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.981108208Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=03c484e1-d1b5-4837-b98a-fd7e8b723872 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.981773121Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93a526bc-c884-455b-861c-30cf7fb42c73 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.983386237Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37d15036-a69c-4b3e-acd7-b2df5322ac5b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.986645905Z" level=info msg="Creating container: default/busybox/busybox" id=63374d5f-a2ca-4da5-a1b2-9968ebef3090 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.986790302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.990995249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:21 embed-certs-739827 crio[776]: time="2025-12-29T07:16:21.991576203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:22 embed-certs-739827 crio[776]: time="2025-12-29T07:16:22.019442616Z" level=info msg="Created container 1fd768318a6d3847c304a45bdedcbbafdf26724c8f4cd8761a4ad9b2dddaf8f7: default/busybox/busybox" id=63374d5f-a2ca-4da5-a1b2-9968ebef3090 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:22 embed-certs-739827 crio[776]: time="2025-12-29T07:16:22.020037958Z" level=info msg="Starting container: 1fd768318a6d3847c304a45bdedcbbafdf26724c8f4cd8761a4ad9b2dddaf8f7" id=3c52baed-2f82-4483-b65b-df0459109563 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:22 embed-certs-739827 crio[776]: time="2025-12-29T07:16:22.02202267Z" level=info msg="Started container" PID=1978 containerID=1fd768318a6d3847c304a45bdedcbbafdf26724c8f4cd8761a4ad9b2dddaf8f7 description=default/busybox/busybox id=3c52baed-2f82-4483-b65b-df0459109563 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc328eb6e95a52f6090f09ad11c9c6c03b9b633b6e586be61146a12ea7b93572
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	1fd768318a6d3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   bc328eb6e95a5       busybox                                      default
	6a3ee81d1c0da       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   8b7bb31a67ff8       coredns-7d764666f9-55529                     kube-system
	a72a8e607c46f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   9a216a03b5c50       storage-provisioner                          kube-system
	9328932c88eea       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   442df4b6a1a33       kindnet-l6mxr                                kube-system
	8758ac8dc2d8d       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      25 seconds ago      Running             kube-proxy                0                   43a74b6cc2a99       kube-proxy-hdmp6                             kube-system
	2983a12c566c7       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   f4f70cf7080e4       kube-scheduler-embed-certs-739827            kube-system
	e38dde494fc58       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   1fde77af335d7       etcd-embed-certs-739827                      kube-system
	a74d7d84b5391       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   08c47d26b1dd2       kube-controller-manager-embed-certs-739827   kube-system
	becb7f5b7f6cb       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   b7c183b0f50f6       kube-apiserver-embed-certs-739827            kube-system
	
	
	==> coredns [6a3ee81d1c0da0ebe553112c7fafa74b7dfbc3a76e2995ef19de8224bd7a1292] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42167 - 18273 "HINFO IN 2224260924776093381.4666710767400200589. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02297738s
	
	
	==> describe nodes <==
	Name:               embed-certs-739827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-739827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=embed-certs-739827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_15_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-739827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:16:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:16:29 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:16:29 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:16:29 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:16:29 +0000   Mon, 29 Dec 2025 07:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-739827
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                ab46c7d0-f92f-48dd-a29d-7cfb62a7d0f3
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-55529                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-739827                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-l6mxr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-739827             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-739827    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-hdmp6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-739827             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node embed-certs-739827 event: Registered Node embed-certs-739827 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e38dde494fc5803c27f6aebef5b96cf5752608ffe7aced65b29468136310a971] <==
	{"level":"info","ts":"2025-12-29T07:15:54.211169Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:15:55.101835Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:15:55.101930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:15:55.102026Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-29T07:15:55.102052Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:15:55.102073Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:15:55.102507Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:15:55.102549Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:15:55.102567Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:15:55.102575Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:15:55.103371Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-739827 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:15:55.103401Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:15:55.103468Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:55.103381Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:15:55.103643Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:15:55.103692Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:15:55.104687Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:55.104794Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:55.104830Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:15:55.104892Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:15:55.105001Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:15:55.105261Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:15:55.105971Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:15:55.108363Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:15:55.108382Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 07:16:29 up 59 min,  0 user,  load average: 3.11, 2.79, 2.01
	Linux embed-certs-739827 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9328932c88eea0e290b2f95dd12c749c314bbe1afa6abeff3f2aaecf076ae506] <==
	I1229 07:16:05.696901       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:16:05.789579       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1229 07:16:05.789727       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:16:05.789756       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:16:05.789787       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:16:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:16:05.994285       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:16:05.994794       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:16:05.994836       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:16:05.995115       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:16:06.489497       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:16:06.489524       1 metrics.go:72] Registering metrics
	I1229 07:16:06.489574       1 controller.go:711] "Syncing nftables rules"
	I1229 07:16:15.994637       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:16:15.994724       1 main.go:301] handling current node
	I1229 07:16:25.996650       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:16:25.996698       1 main.go:301] handling current node
	
	
	==> kube-apiserver [becb7f5b7f6cbd2bb0c9a780602c6b3cde51fd1357bb0a09dfc013da3508e630] <==
	I1229 07:15:56.112162       1 shared_informer.go:377] "Caches are synced"
	I1229 07:15:56.112197       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1229 07:15:56.116737       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:15:56.117087       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:15:56.122098       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:15:56.301453       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:15:57.009669       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:15:57.013422       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:15:57.013442       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:15:57.496164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:15:57.532135       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:15:57.613286       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:15:57.619047       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1229 07:15:57.620297       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:15:57.624259       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:15:58.035722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:15:58.665129       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:15:58.675431       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:15:58.684853       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:16:02.989742       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:02.994467       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:03.788800       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1229 07:16:03.788801       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1229 07:16:03.989415       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1229 07:16:28.561597       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:39152: use of closed network connection
	
	
	==> kube-controller-manager [a74d7d84b53913daca4c313eec5de577ecb40ab8c3e6086c6f1f8d7e7622b876] <==
	I1229 07:16:02.873370       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.873386       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.873433       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.873748       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.873782       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.873993       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.874280       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.874494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.874579       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.876691       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.876730       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.876869       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.877298       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.877329       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.877349       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.877390       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.877428       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.877452       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.881871       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-739827" podCIDRs=["10.244.0.0/24"]
	I1229 07:16:02.946931       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:02.967710       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:02.967727       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:16:02.967731       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:16:03.047423       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:17.866064       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [8758ac8dc2d8d609f5ac84f701dd379df3f8378f41b67279a426dd41e082e9c7] <==
	I1229 07:16:04.239719       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:16:04.310558       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:04.411012       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:04.411063       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1229 07:16:04.411168       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:16:04.438167       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:16:04.438256       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:16:04.446106       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:16:04.446512       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:16:04.446552       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:04.448188       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:16:04.448279       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:16:04.448331       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:16:04.448337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:16:04.448213       1 config.go:200] "Starting service config controller"
	I1229 07:16:04.448358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:16:04.448623       1 config.go:309] "Starting node config controller"
	I1229 07:16:04.448632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:16:04.448640       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:16:04.549418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:16:04.549401       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:16:04.549457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2983a12c566c78bab4c4fa591d2a244d243aa7e4f626c5339b619b9142afeac4] <==
	E1229 07:15:56.052901       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:15:56.052918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:15:56.052493       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:15:56.053205       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:15:56.053455       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:15:56.053467       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:15:56.053480       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:15:56.053534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:15:56.053543       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:15:56.911824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:15:56.914824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:15:56.935778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:15:56.998099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:15:57.040465       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:15:57.042358       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:15:57.114774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:15:57.159198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:15:57.198351       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:15:57.199468       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1229 07:15:57.217555       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:15:57.241772       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:15:57.243701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:15:57.270150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:15:57.319511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	I1229 07:15:59.546460       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:16:03 embed-certs-739827 kubelet[1304]: I1229 07:16:03.904415    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zpc\" (UniqueName: \"kubernetes.io/projected/8c745434-7be8-4f4a-9685-0b2ebdcd1a6f-kube-api-access-67zpc\") pod \"kindnet-l6mxr\" (UID: \"8c745434-7be8-4f4a-9685-0b2ebdcd1a6f\") " pod="kube-system/kindnet-l6mxr"
	Dec 29 07:16:03 embed-certs-739827 kubelet[1304]: I1229 07:16:03.904518    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddf343da-e4e2-4ea1-a49d-02ad395abdaa-kube-proxy\") pod \"kube-proxy-hdmp6\" (UID: \"ddf343da-e4e2-4ea1-a49d-02ad395abdaa\") " pod="kube-system/kube-proxy-hdmp6"
	Dec 29 07:16:03 embed-certs-739827 kubelet[1304]: I1229 07:16:03.904568    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf343da-e4e2-4ea1-a49d-02ad395abdaa-xtables-lock\") pod \"kube-proxy-hdmp6\" (UID: \"ddf343da-e4e2-4ea1-a49d-02ad395abdaa\") " pod="kube-system/kube-proxy-hdmp6"
	Dec 29 07:16:03 embed-certs-739827 kubelet[1304]: I1229 07:16:03.904598    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8s9h\" (UniqueName: \"kubernetes.io/projected/ddf343da-e4e2-4ea1-a49d-02ad395abdaa-kube-api-access-v8s9h\") pod \"kube-proxy-hdmp6\" (UID: \"ddf343da-e4e2-4ea1-a49d-02ad395abdaa\") " pod="kube-system/kube-proxy-hdmp6"
	Dec 29 07:16:04 embed-certs-739827 kubelet[1304]: E1229 07:16:04.136764    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-739827" containerName="kube-scheduler"
	Dec 29 07:16:04 embed-certs-739827 kubelet[1304]: E1229 07:16:04.468649    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-739827" containerName="etcd"
	Dec 29 07:16:04 embed-certs-739827 kubelet[1304]: I1229 07:16:04.531924    1304 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-hdmp6" podStartSLOduration=1.531904948 podStartE2EDuration="1.531904948s" podCreationTimestamp="2025-12-29 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:16:04.531414555 +0000 UTC m=+6.132261858" watchObservedRunningTime="2025-12-29 07:16:04.531904948 +0000 UTC m=+6.132752250"
	Dec 29 07:16:08 embed-certs-739827 kubelet[1304]: E1229 07:16:08.413134    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-739827" containerName="kube-controller-manager"
	Dec 29 07:16:08 embed-certs-739827 kubelet[1304]: I1229 07:16:08.425093    1304 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-l6mxr" podStartSLOduration=4.084491537 podStartE2EDuration="5.425072114s" podCreationTimestamp="2025-12-29 07:16:03 +0000 UTC" firstStartedPulling="2025-12-29 07:16:04.128165246 +0000 UTC m=+5.729012539" lastFinishedPulling="2025-12-29 07:16:05.468745823 +0000 UTC m=+7.069593116" observedRunningTime="2025-12-29 07:16:05.535121869 +0000 UTC m=+7.135969170" watchObservedRunningTime="2025-12-29 07:16:08.425072114 +0000 UTC m=+10.025919435"
	Dec 29 07:16:13 embed-certs-739827 kubelet[1304]: E1229 07:16:13.136843    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-739827" containerName="kube-apiserver"
	Dec 29 07:16:14 embed-certs-739827 kubelet[1304]: E1229 07:16:14.141114    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-739827" containerName="kube-scheduler"
	Dec 29 07:16:14 embed-certs-739827 kubelet[1304]: E1229 07:16:14.469601    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-739827" containerName="etcd"
	Dec 29 07:16:16 embed-certs-739827 kubelet[1304]: I1229 07:16:16.310343    1304 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:16:16 embed-certs-739827 kubelet[1304]: I1229 07:16:16.395842    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qn5j\" (UniqueName: \"kubernetes.io/projected/279a41bb-4bd1-4a8d-9999-27eb0a996229-kube-api-access-2qn5j\") pod \"coredns-7d764666f9-55529\" (UID: \"279a41bb-4bd1-4a8d-9999-27eb0a996229\") " pod="kube-system/coredns-7d764666f9-55529"
	Dec 29 07:16:16 embed-certs-739827 kubelet[1304]: I1229 07:16:16.395906    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b7edc06a-181d-4c30-b979-9aa3f1f50ecb-tmp\") pod \"storage-provisioner\" (UID: \"b7edc06a-181d-4c30-b979-9aa3f1f50ecb\") " pod="kube-system/storage-provisioner"
	Dec 29 07:16:16 embed-certs-739827 kubelet[1304]: I1229 07:16:16.396007    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnq6c\" (UniqueName: \"kubernetes.io/projected/b7edc06a-181d-4c30-b979-9aa3f1f50ecb-kube-api-access-rnq6c\") pod \"storage-provisioner\" (UID: \"b7edc06a-181d-4c30-b979-9aa3f1f50ecb\") " pod="kube-system/storage-provisioner"
	Dec 29 07:16:16 embed-certs-739827 kubelet[1304]: I1229 07:16:16.396114    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/279a41bb-4bd1-4a8d-9999-27eb0a996229-config-volume\") pod \"coredns-7d764666f9-55529\" (UID: \"279a41bb-4bd1-4a8d-9999-27eb0a996229\") " pod="kube-system/coredns-7d764666f9-55529"
	Dec 29 07:16:17 embed-certs-739827 kubelet[1304]: E1229 07:16:17.550961    1304 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-55529" containerName="coredns"
	Dec 29 07:16:17 embed-certs-739827 kubelet[1304]: I1229 07:16:17.577304    1304 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.577283567 podStartE2EDuration="13.577283567s" podCreationTimestamp="2025-12-29 07:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:16:17.576936936 +0000 UTC m=+19.177784256" watchObservedRunningTime="2025-12-29 07:16:17.577283567 +0000 UTC m=+19.178130871"
	Dec 29 07:16:17 embed-certs-739827 kubelet[1304]: I1229 07:16:17.577418    1304 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-55529" podStartSLOduration=13.577408864 podStartE2EDuration="13.577408864s" podCreationTimestamp="2025-12-29 07:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:16:17.566717115 +0000 UTC m=+19.167564486" watchObservedRunningTime="2025-12-29 07:16:17.577408864 +0000 UTC m=+19.178256166"
	Dec 29 07:16:18 embed-certs-739827 kubelet[1304]: E1229 07:16:18.418262    1304 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-739827" containerName="kube-controller-manager"
	Dec 29 07:16:18 embed-certs-739827 kubelet[1304]: E1229 07:16:18.555895    1304 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-55529" containerName="coredns"
	Dec 29 07:16:19 embed-certs-739827 kubelet[1304]: E1229 07:16:19.557867    1304 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-55529" containerName="coredns"
	Dec 29 07:16:20 embed-certs-739827 kubelet[1304]: I1229 07:16:20.522585    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxqn\" (UniqueName: \"kubernetes.io/projected/fec130f6-04f7-4f99-8723-932ebe4f8b00-kube-api-access-flxqn\") pod \"busybox\" (UID: \"fec130f6-04f7-4f99-8723-932ebe4f8b00\") " pod="default/busybox"
	Dec 29 07:16:22 embed-certs-739827 kubelet[1304]: I1229 07:16:22.576585    1304 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.392729039 podStartE2EDuration="2.57656535s" podCreationTimestamp="2025-12-29 07:16:20 +0000 UTC" firstStartedPulling="2025-12-29 07:16:20.798705185 +0000 UTC m=+22.399552479" lastFinishedPulling="2025-12-29 07:16:21.982541508 +0000 UTC m=+23.583388790" observedRunningTime="2025-12-29 07:16:22.576392206 +0000 UTC m=+24.177239508" watchObservedRunningTime="2025-12-29 07:16:22.57656535 +0000 UTC m=+24.177412651"
	
	
	==> storage-provisioner [a72a8e607c46f936581488c3ca66580c3d4805f123ebb8c3e53258f54a2b8b97] <==
	I1229 07:16:16.721032       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:16:16.732383       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:16:16.732435       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:16:16.735374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:16.742211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:16:16.742605       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:16:16.742864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-739827_8124b4f7-66b6-4b76-81a9-71f8b18aab60!
	I1229 07:16:16.743189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"347ea651-c327-4cd5-b9c6-ab5a3882fdf7", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-739827_8124b4f7-66b6-4b76-81a9-71f8b18aab60 became leader
	W1229 07:16:16.746010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:16.750117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:16:16.843173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-739827_8124b4f7-66b6-4b76-81a9-71f8b18aab60!
	W1229 07:16:18.754356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:18.758567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:20.761674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:20.766590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:22.769911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:22.776591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:24.779399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:24.784196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:26.787811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:26.792048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:28.795411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:28.800029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-739827 -n embed-certs-739827
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-739827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.096552ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:16:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
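The MK_ADDON_ENABLE_PAUSED exit above comes from the paused-state check shelling out to runc inside the node; "open /run/runc: no such file or directory" simply means runc has no state directory on this crio-based node. A quick manual check, sketched under the assumption that the profile is still running and that crictl ships in the node image:

    # re-run the exact command the check uses (expected to fail the same way)
    out/minikube-linux-amd64 -p default-k8s-diff-port-798607 ssh -- sudo runc list -f json
    # query the CRI side instead, which is what actually tracks containers under crio
    out/minikube-linux-amd64 -p default-k8s-diff-port-798607 ssh -- sudo crictl ps -a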
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-798607 describe deploy/metrics-server -n kube-system: exit status 1 (54.698904ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-798607 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-798607
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-798607:

-- stdout --
	[
	    {
	        "Id": "430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277",
	        "Created": "2025-12-29T07:15:54.159908787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:15:54.211372795Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/hostname",
	        "HostsPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/hosts",
	        "LogPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277-json.log",
	        "Name": "/default-k8s-diff-port-798607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-798607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-798607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277",
	                "LowerDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-798607",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-798607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-798607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-798607",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-798607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "44cae1fce3fe2e1eb217b227b200ac9d26ed2592dfbfdd0cc336defaa1f3c676",
	            "SandboxKey": "/var/run/docker/netns/44cae1fce3fe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-798607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a50196d85ec6cf5fe29b96f215bd3c465a58a5511f7e880d6481f36ac7ca686a",
	                    "EndpointID": "85b111dd2b438d10a880567f6f22ac6670ad20bfbd824c5cd62755517a53bab6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:1c:48:4f:d4:be",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-798607",
	                        "430601fd040d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
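In the inspect output above, the published host ports appear only under NetworkSettings.Ports; the HostConfig.PortBindings entries were created with empty HostPort values, so Docker assigned ephemeral ports at start. A small sketch to pull that mapping back out, assuming jq is available on the host:

    # print only the runtime port map for this profile's container
    docker inspect default-k8s-diff-port-798607 --format '{{json .NetworkSettings.Ports}}' | jq .
    # e.g. 8444/tcp maps to 127.0.0.1:33076, the non-default apiserver port this profile uses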
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-798607 logs -n 25
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-001954 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-001954          │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ delete  │ -p cert-options-001954                                                                                                                                                                                                                        │ cert-options-001954          │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:13 UTC │ 29 Dec 25 07:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-876718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │                     │
	│ stop    │ -p old-k8s-version-876718 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-876718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                                                                                     │ stopped-upgrade-518014       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                                                                                               │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ stop    │ -p no-preload-122332 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ delete  │ -p cert-expiration-452455                                                                                                                                                                                                                     │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p disable-driver-mounts-708770                                                                                                                                                                                                               │ disable-driver-mounts-708770 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p no-preload-122332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:16:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:16:02.443749  260780 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:02.443868  260780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:02.443876  260780 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:02.443880  260780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:02.444091  260780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:16:02.444548  260780 out.go:368] Setting JSON to false
	I1229 07:16:02.445612  260780 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3514,"bootTime":1766989048,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:16:02.445676  260780 start.go:143] virtualization: kvm guest
	I1229 07:16:02.447529  260780 out.go:179] * [no-preload-122332] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:16:02.448624  260780 notify.go:221] Checking for updates...
	I1229 07:16:02.448661  260780 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:16:02.450006  260780 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:16:02.451256  260780 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:02.452625  260780 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:16:02.453802  260780 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:16:02.454837  260780 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:16:02.456380  260780 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:02.456926  260780 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:16:02.480656  260780 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:16:02.480741  260780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:02.548574  260780 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:16:02.538465465 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:02.548680  260780 docker.go:319] overlay module found
	I1229 07:16:02.550359  260780 out.go:179] * Using the docker driver based on existing profile
	I1229 07:16:02.551629  260780 start.go:309] selected driver: docker
	I1229 07:16:02.551642  260780 start.go:928] validating driver "docker" against &{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:02.551718  260780 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:16:02.552298  260780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:02.631849  260780 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:16:02.604568832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:02.632289  260780 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:02.632333  260780 cni.go:84] Creating CNI manager for ""
	I1229 07:16:02.632405  260780 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:02.632453  260780 start.go:353] cluster config:
	{Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:02.635276  260780 out.go:179] * Starting "no-preload-122332" primary control-plane node in "no-preload-122332" cluster
	I1229 07:16:02.636438  260780 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:16:02.637622  260780 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:15:59.265535  252990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:15:59.269699  252990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:15:59.269714  252990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:15:59.282597  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:15:59.515698  252990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:15:59.515868  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:15:59.515878  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-739827 minikube.k8s.io/updated_at=2025_12_29T07_15_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=embed-certs-739827 minikube.k8s.io/primary=true
	I1229 07:15:59.602995  252990 ops.go:34] apiserver oom_adj: -16
	I1229 07:15:59.603094  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:00.104202  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:00.603616  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:01.103828  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:01.604171  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:02.103429  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:02.603669  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:02.638877  260780 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:02.639014  260780 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:16:02.639377  260780 cache.go:107] acquiring lock: {Name:mk524ccc7d3121d195adc7d1863af70c1e10cb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.639463  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:16:02.639473  260780 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.257µs
	I1229 07:16:02.639482  260780 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:16:02.639503  260780 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:16:02.639896  260780 cache.go:107] acquiring lock: {Name:mk4e3cc5ac4b58daa39b77bf4639b595a7b5e1bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.639969  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:16:02.639978  260780 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 91.151µs
	I1229 07:16:02.639986  260780 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:16:02.640002  260780 cache.go:107] acquiring lock: {Name:mkceb8935c60ed9a529274ab83854aa71dbe9a7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640049  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:16:02.640056  260780 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 57.168µs
	I1229 07:16:02.640064  260780 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:16:02.640076  260780 cache.go:107] acquiring lock: {Name:mk52f4077c79f8806c7eb2c6a7253ed35dcf09ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640116  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:16:02.640123  260780 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 49.4µs
	I1229 07:16:02.640131  260780 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:16:02.640158  260780 cache.go:107] acquiring lock: {Name:mk6876db4017aa5ef89eab36b68c600dec62345c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640193  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:16:02.640199  260780 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 57.778µs
	I1229 07:16:02.640209  260780 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:16:02.640254  260780 cache.go:107] acquiring lock: {Name:mkca02c24b265c83f3ba73c3e4bff2d28831259c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640294  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:16:02.640301  260780 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 50.343µs
	I1229 07:16:02.640308  260780 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:16:02.640319  260780 cache.go:107] acquiring lock: {Name:mk2827ee73a1c5c546c3035bd69b730bda1ef682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640351  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:16:02.640358  260780 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 40.709µs
	I1229 07:16:02.640366  260780 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:16:02.640379  260780 cache.go:107] acquiring lock: {Name:mkeb7d05fa98b741eb24c41313df007ce9bbb93e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.640417  260780 cache.go:115] /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:16:02.640434  260780 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 56.634µs
	I1229 07:16:02.640449  260780 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:16:02.640457  260780 cache.go:87] Successfully saved all images to host disk.
	I1229 07:16:02.664020  260780 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:16:02.664052  260780 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:16:02.664073  260780 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:16:02.664108  260780 start.go:360] acquireMachinesLock for no-preload-122332: {Name:mka83f33e779c9aed23f5a0e4fef1298c9058532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:02.664173  260780 start.go:364] duration metric: took 43.904µs to acquireMachinesLock for "no-preload-122332"
	I1229 07:16:02.664192  260780 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:16:02.664198  260780 fix.go:54] fixHost starting: 
	I1229 07:16:02.664514  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:02.686258  260780 fix.go:112] recreateIfNeeded on no-preload-122332: state=Stopped err=<nil>
	W1229 07:16:02.686292  260780 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:16:03.103451  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:03.603687  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:04.103269  252990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:04.200816  252990 kubeadm.go:1114] duration metric: took 4.684995965s to wait for elevateKubeSystemPrivileges
	I1229 07:16:04.200855  252990 kubeadm.go:403] duration metric: took 14.678699553s to StartCluster
	I1229 07:16:04.200877  252990 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:04.200945  252990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:04.202494  252990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:04.202771  252990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:16:04.202786  252990 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:04.202763  252990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:04.202940  252990 addons.go:70] Setting default-storageclass=true in profile "embed-certs-739827"
	I1229 07:16:04.202966  252990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-739827"
	I1229 07:16:04.202891  252990 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-739827"
	I1229 07:16:04.203085  252990 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-739827"
	I1229 07:16:04.203096  252990 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:04.203108  252990 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:04.203462  252990 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:04.203557  252990 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:04.205492  252990 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:04.206702  252990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:04.230778  252990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:00.228848  257698 out.go:252]   - Booting up control plane ...
	I1229 07:16:00.228978  257698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:16:00.229080  257698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:16:00.229742  257698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:16:00.247141  257698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:16:00.247292  257698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:16:00.254581  257698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:16:00.255265  257698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:16:00.255330  257698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:16:00.355716  257698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:16:00.355826  257698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:16:00.857320  257698 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.713178ms
	I1229 07:16:00.861513  257698 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:16:00.861676  257698 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1229 07:16:00.861806  257698 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:16:00.861919  257698 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:16:01.866039  257698 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004375882s
	I1229 07:16:02.779963  257698 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.917503387s
	I1229 07:16:04.364141  257698 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502504327s
	I1229 07:16:04.385747  257698 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:16:04.396788  257698 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:16:04.408353  257698 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:16:04.408647  257698 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-798607 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:16:04.420045  257698 kubeadm.go:319] [bootstrap-token] Using token: ya1d0f.4qbol9q1tpj6po5z
	I1229 07:16:04.422613  257698 out.go:252]   - Configuring RBAC rules ...
	I1229 07:16:04.422864  257698 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:16:04.426012  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:16:04.432627  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:16:04.436262  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:16:04.439098  257698 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:16:04.442529  257698 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:16:04.232109  252990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:04.232126  252990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:04.232174  252990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:04.233010  252990 addons.go:239] Setting addon default-storageclass=true in "embed-certs-739827"
	I1229 07:16:04.233060  252990 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:04.233546  252990 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:04.263907  252990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:04.263964  252990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:04.264043  252990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:04.263920  252990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:04.290192  252990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:04.310592  252990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:16:04.369759  252990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:04.383050  252990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:04.405077  252990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:04.494516  252990 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1229 07:16:04.495828  252990 node_ready.go:35] waiting up to 6m0s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:04.747837  252990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:16:04.770887  257698 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:16:05.207942  257698 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:16:05.771444  257698 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:16:05.772314  257698 kubeadm.go:319] 
	I1229 07:16:05.772377  257698 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:16:05.772387  257698 kubeadm.go:319] 
	I1229 07:16:05.772479  257698 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:16:05.772489  257698 kubeadm.go:319] 
	I1229 07:16:05.772512  257698 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:16:05.772564  257698 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:16:05.772609  257698 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:16:05.772615  257698 kubeadm.go:319] 
	I1229 07:16:05.772699  257698 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:16:05.772712  257698 kubeadm.go:319] 
	I1229 07:16:05.772779  257698 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:16:05.772788  257698 kubeadm.go:319] 
	I1229 07:16:05.772862  257698 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:16:05.772996  257698 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:16:05.773099  257698 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:16:05.773111  257698 kubeadm.go:319] 
	I1229 07:16:05.773232  257698 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:16:05.773329  257698 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:16:05.773337  257698 kubeadm.go:319] 
	I1229 07:16:05.773436  257698 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token ya1d0f.4qbol9q1tpj6po5z \
	I1229 07:16:05.773560  257698 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:16:05.773588  257698 kubeadm.go:319] 	--control-plane 
	I1229 07:16:05.773593  257698 kubeadm.go:319] 
	I1229 07:16:05.773695  257698 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:16:05.773702  257698 kubeadm.go:319] 
	I1229 07:16:05.773794  257698 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token ya1d0f.4qbol9q1tpj6po5z \
	I1229 07:16:05.773941  257698 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
	I1229 07:16:05.776726  257698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:16:05.776844  257698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:16:05.776881  257698 cni.go:84] Creating CNI manager for ""
	I1229 07:16:05.776894  257698 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:05.778605  257698 out.go:179] * Configuring CNI (Container Networking Interface) ...
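For reference, the CNI step announced here amounts to writing a kindnet manifest to /var/tmp/minikube/cni.yaml and applying it with the bundled kubectl, as the later 257698 lines show. A minimal equivalent of that apply, assuming the same binary and kubeconfig paths:

	sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml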
	I1229 07:16:02.688116  260780 out.go:252] * Restarting existing docker container for "no-preload-122332" ...
	I1229 07:16:02.688198  260780 cli_runner.go:164] Run: docker start no-preload-122332
	I1229 07:16:03.008410  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:03.027569  260780 kic.go:430] container "no-preload-122332" state is running.
	I1229 07:16:03.027901  260780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:16:03.047076  260780 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/config.json ...
	I1229 07:16:03.047347  260780 machine.go:94] provisionDockerMachine start ...
	I1229 07:16:03.047434  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:03.067494  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:03.067781  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:03.067797  260780 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:16:03.068450  260780 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49898->127.0.0.1:33078: read: connection reset by peer
	I1229 07:16:06.224107  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:16:06.224161  260780 ubuntu.go:182] provisioning hostname "no-preload-122332"
	I1229 07:16:06.224240  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:06.243763  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:06.244071  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:06.244094  260780 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-122332 && echo "no-preload-122332" | sudo tee /etc/hostname
	I1229 07:16:06.395356  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-122332
	
	I1229 07:16:06.395431  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:06.414003  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:06.414305  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:06.414327  260780 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-122332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-122332/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-122332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:16:06.551715  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:16:06.551746  260780 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:16:06.551781  260780 ubuntu.go:190] setting up certificates
	I1229 07:16:06.551796  260780 provision.go:84] configureAuth start
	I1229 07:16:06.551861  260780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:16:06.569689  260780 provision.go:143] copyHostCerts
	I1229 07:16:06.569739  260780 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:16:06.569752  260780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:16:06.569828  260780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:16:06.569940  260780 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:16:06.569948  260780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:16:06.569976  260780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:16:06.570057  260780 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:16:06.570068  260780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:16:06.570106  260780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:16:06.570174  260780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.no-preload-122332 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-122332]
	I1229 07:16:06.818389  260780 provision.go:177] copyRemoteCerts
	I1229 07:16:06.818449  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:16:06.818482  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:06.837040  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:06.935125  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:16:06.952619  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:16:06.969746  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:16:06.986963  260780 provision.go:87] duration metric: took 435.143894ms to configureAuth
	I1229 07:16:06.987000  260780 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:16:06.987194  260780 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:06.987348  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.005799  260780 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:07.006103  260780 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1229 07:16:07.006133  260780 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:16:04.749168  252990 addons.go:530] duration metric: took 546.374384ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:16:04.999393  252990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-739827" context rescaled to 1 replicas
	W1229 07:16:06.499864  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	I1229 07:16:07.493300  260780 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:16:07.493322  260780 machine.go:97] duration metric: took 4.445958531s to provisionDockerMachine
	I1229 07:16:07.493335  260780 start.go:293] postStartSetup for "no-preload-122332" (driver="docker")
	I1229 07:16:07.493349  260780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:16:07.493418  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:16:07.493453  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.514630  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.613194  260780 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:16:07.616995  260780 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:16:07.617024  260780 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:16:07.617034  260780 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:16:07.617074  260780 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:16:07.617156  260780 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:16:07.617256  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:16:07.625107  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:07.642849  260780 start.go:296] duration metric: took 149.498351ms for postStartSetup
	I1229 07:16:07.642936  260780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:16:07.642983  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.664279  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.762413  260780 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:16:07.766991  260780 fix.go:56] duration metric: took 5.102787642s for fixHost
	I1229 07:16:07.767017  260780 start.go:83] releasing machines lock for "no-preload-122332", held for 5.102835017s
	I1229 07:16:07.767090  260780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-122332
	I1229 07:16:07.786134  260780 ssh_runner.go:195] Run: cat /version.json
	I1229 07:16:07.786181  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.786213  260780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:16:07.786309  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:07.804023  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.804023  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:07.897336  260780 ssh_runner.go:195] Run: systemctl --version
	I1229 07:16:07.950849  260780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:16:07.987111  260780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:16:07.991621  260780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:16:07.991699  260780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:16:07.999673  260780 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:16:07.999694  260780 start.go:496] detecting cgroup driver to use...
	I1229 07:16:07.999725  260780 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:16:07.999775  260780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:16:08.013677  260780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:16:08.025257  260780 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:16:08.025308  260780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:16:08.040318  260780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:16:08.052098  260780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:16:08.130486  260780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:16:08.216857  260780 docker.go:234] disabling docker service ...
	I1229 07:16:08.216913  260780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:16:08.231380  260780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:16:08.243659  260780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:16:08.325705  260780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:16:08.406352  260780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:16:08.419666  260780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:16:08.437006  260780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:16:08.437068  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.445952  260780 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:16:08.446004  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.454594  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.462937  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.471778  260780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:16:08.479676  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.488706  260780 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.497036  260780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:08.506200  260780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:16:08.514213  260780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:16:08.521389  260780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:08.602735  260780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:16:08.737390  260780 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:16:08.737446  260780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:16:08.741340  260780 start.go:574] Will wait 60s for crictl version
	I1229 07:16:08.741384  260780 ssh_runner.go:195] Run: which crictl
	I1229 07:16:08.745157  260780 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:16:08.768460  260780 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:16:08.768544  260780 ssh_runner.go:195] Run: crio --version
	I1229 07:16:08.795631  260780 ssh_runner.go:195] Run: crio --version
	I1229 07:16:08.824111  260780 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
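The sed edits a few lines above set the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in CRI-O's drop-in config. A quick spot-check on the node, assuming the same drop-in path, with the values this run writes shown as comments:

	# expected after the edits above (illustrative):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf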
	I1229 07:16:05.414177  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:05.414637  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:05.414688  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:05.414733  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:05.444555  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:05.444574  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:05.444578  225445 cri.go:96] found id: ""
	I1229 07:16:05.444585  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:05.444640  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.448489  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.452440  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:05.452513  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:05.483010  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:05.483033  225445 cri.go:96] found id: ""
	I1229 07:16:05.483042  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:05.483109  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.487159  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:05.487242  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:05.516757  225445 cri.go:96] found id: ""
	I1229 07:16:05.516783  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.516791  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:05.516797  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:05.516846  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:05.545493  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:16:05.545512  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:05.545516  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:05.545519  225445 cri.go:96] found id: ""
	I1229 07:16:05.545526  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:05.545570  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.549488  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.552977  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.556385  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:05.556435  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:05.583362  225445 cri.go:96] found id: ""
	I1229 07:16:05.583383  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.583391  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:05.583396  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:05.583452  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:16:05.613371  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:05.613391  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:05.613395  225445 cri.go:96] found id: ""
	I1229 07:16:05.613403  225445 logs.go:282] 2 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:16:05.613446  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.617582  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:05.621252  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:16:05.621310  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:16:05.649489  225445 cri.go:96] found id: ""
	I1229 07:16:05.649514  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.649526  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:05.649533  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:05.649588  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:05.676676  225445 cri.go:96] found id: ""
	I1229 07:16:05.676699  225445 logs.go:282] 0 containers: []
	W1229 07:16:05.676706  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:05.676714  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:05.676724  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:05.703589  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:05.703628  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:05.779177  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:05.779215  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:05.879985  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:05.880020  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:05.953121  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:05.953156  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:05.983806  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:05.983842  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:06.015711  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:06.015744  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:06.052557  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:06.052593  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:06.067154  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:06.067181  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:06.149329  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:06.149356  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:06.149373  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:06.185562  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:06.185590  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:06.217252  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:06.217278  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:06.253854  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:16:06.253886  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:16:06.280521  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:06.280551  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:06.280563  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:06.280616  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:16:06.280628  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:06.280633  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:06.280637  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:05.779739  257698 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:16:05.784323  257698 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:16:05.784340  257698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:16:05.798520  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:16:06.021729  257698 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:16:06.021853  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:06.021877  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-798607 minikube.k8s.io/updated_at=2025_12_29T07_16_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=default-k8s-diff-port-798607 minikube.k8s.io/primary=true
	I1229 07:16:06.032297  257698 ops.go:34] apiserver oom_adj: -16
	I1229 07:16:06.123180  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:06.624136  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:07.123583  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:07.623394  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:08.123788  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:08.623977  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:09.123172  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:09.623339  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
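The repeated "kubectl get sa default" runs above are a readiness poll for the default ServiceAccount in the default namespace, which the controller-manager creates once the control plane is up. Roughly equivalent shell, assuming the same kubectl and kubeconfig paths (illustrative, not taken from this run):

	until sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done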
	I1229 07:16:08.825511  260780 cli_runner.go:164] Run: docker network inspect no-preload-122332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:16:08.843046  260780 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:16:08.847362  260780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:08.857654  260780 kubeadm.go:884] updating cluster {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:16:08.857747  260780 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:08.857778  260780 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:08.892104  260780 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:08.892123  260780 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:16:08.892130  260780 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:16:08.892242  260780 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-122332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:16:08.892308  260780 ssh_runner.go:195] Run: crio config
	I1229 07:16:08.938576  260780 cni.go:84] Creating CNI manager for ""
	I1229 07:16:08.938594  260780 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:08.938607  260780 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:16:08.938635  260780 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-122332 NodeName:no-preload-122332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:16:08.938777  260780 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-122332"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:16:08.938840  260780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:16:08.946885  260780 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:16:08.946947  260780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:16:08.954902  260780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:16:08.967474  260780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:16:08.980397  260780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1229 07:16:08.992871  260780 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:16:08.996432  260780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:09.006542  260780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:09.087746  260780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:09.112470  260780 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332 for IP: 192.168.94.2
	I1229 07:16:09.112493  260780 certs.go:195] generating shared ca certs ...
	I1229 07:16:09.112511  260780 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.112678  260780 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:16:09.112731  260780 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:16:09.112746  260780 certs.go:257] generating profile certs ...
	I1229 07:16:09.112845  260780 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.key
	I1229 07:16:09.112928  260780 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key.8c20c595
	I1229 07:16:09.112984  260780 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key
	I1229 07:16:09.113144  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:16:09.113190  260780 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:16:09.113204  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:16:09.113256  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:16:09.113304  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:16:09.113336  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:16:09.113392  260780 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:09.114188  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:16:09.135589  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:16:09.155082  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:16:09.174510  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:16:09.200211  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:16:09.220502  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:16:09.237694  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:16:09.254571  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:16:09.270728  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:16:09.288431  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:16:09.305730  260780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:16:09.323684  260780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:16:09.336197  260780 ssh_runner.go:195] Run: openssl version
	I1229 07:16:09.342075  260780 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.348961  260780 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:16:09.356166  260780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.359621  260780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.359673  260780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:16:09.395522  260780 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:16:09.403023  260780 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.410206  260780 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:16:09.417565  260780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.421113  260780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.421166  260780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:16:09.465713  260780 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:16:09.473479  260780 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.480734  260780 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:16:09.488057  260780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.491795  260780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.491842  260780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:09.529280  260780 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:16:09.538665  260780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:16:09.544676  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:16:09.584472  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:16:09.622357  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:16:09.670719  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:16:09.720180  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:16:09.779489  260780 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
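Each openssl run above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now"; a zero exit status means it will be. A manual spot-check on the node, assuming the same cert path as this run:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"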
	I1229 07:16:09.819391  260780 kubeadm.go:401] StartCluster: {Name:no-preload-122332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-122332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:09.819485  260780 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:16:09.819533  260780 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:16:09.849341  260780 cri.go:96] found id: "182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd"
	I1229 07:16:09.849360  260780 cri.go:96] found id: "3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc"
	I1229 07:16:09.849364  260780 cri.go:96] found id: "482322719dad640690982288c2258e90836d194891b2179cab964e1340265902"
	I1229 07:16:09.849371  260780 cri.go:96] found id: "013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206"
	I1229 07:16:09.849374  260780 cri.go:96] found id: ""
	I1229 07:16:09.849413  260780 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:16:09.861190  260780 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:16:09Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:16:09.861272  260780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:16:09.869803  260780 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:16:09.869825  260780 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:16:09.869882  260780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:16:09.878191  260780 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:16:09.878941  260780 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-122332" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:09.879427  260780 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-122332" cluster setting kubeconfig missing "no-preload-122332" context setting]
	I1229 07:16:09.880162  260780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.881753  260780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:16:09.890244  260780 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1229 07:16:09.890272  260780 kubeadm.go:602] duration metric: took 20.440452ms to restartPrimaryControlPlane
	I1229 07:16:09.890282  260780 kubeadm.go:403] duration metric: took 70.898981ms to StartCluster
	I1229 07:16:09.890298  260780 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.890361  260780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:09.891559  260780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:09.891789  260780 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:09.891886  260780 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:09.891989  260780 addons.go:70] Setting storage-provisioner=true in profile "no-preload-122332"
	I1229 07:16:09.892005  260780 addons.go:70] Setting dashboard=true in profile "no-preload-122332"
	I1229 07:16:09.892011  260780 addons.go:239] Setting addon storage-provisioner=true in "no-preload-122332"
	I1229 07:16:09.892018  260780 addons.go:239] Setting addon dashboard=true in "no-preload-122332"
	W1229 07:16:09.892020  260780 addons.go:248] addon storage-provisioner should already be in state true
	W1229 07:16:09.892026  260780 addons.go:248] addon dashboard should already be in state true
	I1229 07:16:09.892043  260780 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:16:09.892047  260780 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:16:09.892042  260780 addons.go:70] Setting default-storageclass=true in profile "no-preload-122332"
	I1229 07:16:09.892067  260780 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-122332"
	I1229 07:16:09.892403  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.892516  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.891991  260780 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:09.892615  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.894792  260780 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:09.896115  260780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:09.918174  260780 addons.go:239] Setting addon default-storageclass=true in "no-preload-122332"
	W1229 07:16:09.918202  260780 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:16:09.918241  260780 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:16:09.918679  260780 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:16:09.921808  260780 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:16:09.922519  260780 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:09.924237  260780 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
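
Each "Setting addon ..." step starts by confirming the machine still exists and is running, which is what the repeated docker container inspect --format={{.State.Status}} calls above are doing. A minimal sketch of that check is below; containerState is a hypothetical helper, not minikube's kic driver code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState returns the Docker state string ("running", "exited", ...)
    // for a named container, mirroring the repeated
    // docker container inspect --format={{.State.Status}} calls in the log.
    // Hypothetical helper for illustration only.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("no-preload-122332")
    	fmt.Println(state, err)
    }
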
	I1229 07:16:10.123978  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:10.623420  257698 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:16:10.704283  257698 kubeadm.go:1114] duration metric: took 4.682487614s to wait for elevateKubeSystemPrivileges
	I1229 07:16:10.704324  257698 kubeadm.go:403] duration metric: took 11.943493884s to StartCluster
	I1229 07:16:10.704346  257698 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:10.704418  257698 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:10.707046  257698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:10.707375  257698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:16:10.707386  257698 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:10.707463  257698 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:10.707555  257698 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-798607"
	I1229 07:16:10.707570  257698 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:10.707574  257698 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-798607"
	I1229 07:16:10.707647  257698 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:16:10.707572  257698 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-798607"
	I1229 07:16:10.707699  257698 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-798607"
	I1229 07:16:10.708066  257698 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:10.708302  257698 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:10.709703  257698 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:10.710951  257698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:10.735779  257698 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:10.736976  257698 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:10.736995  257698 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:10.737045  257698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:10.737451  257698 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-798607"
	I1229 07:16:10.737533  257698 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:16:10.737917  257698 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:10.767368  257698 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:10.767411  257698 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:10.767465  257698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:10.769643  257698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:10.807313  257698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:10.843092  257698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:16:10.865866  257698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:10.903166  257698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:10.938458  257698 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:11.033447  257698 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1229 07:16:11.037700  257698 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:16:11.242066  257698 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
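
The long sed pipeline at 07:16:10.843 splices a hosts block for host.minikube.internal (plus a log directive) into the CoreDNS Corefile and feeds the result to kubectl replace, which is what the "host record injected" line then confirms. Below is a sketch of the same edit done in Go instead of sed; injectHostRecord is a hypothetical helper and only the hosts insertion is shown, not the log directive.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block for host.minikube.internal just
    // before the "forward . /etc/resolv.conf" line of a CoreDNS Corefile,
    // mimicking the sed edit from the log. Sketch only; minikube edits the live
    // ConfigMap and then runs `kubectl replace -f -`.
    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			b.WriteString(hostsBlock)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
    }
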
	I1229 07:16:09.924367  260780 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:09.924385  260780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:09.924453  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:09.927664  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:16:09.927690  260780 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:16:09.927748  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:09.947784  260780 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:09.947810  260780 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:09.947872  260780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:16:09.962102  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:09.964066  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:09.976057  260780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:16:10.047398  260780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:10.062250  260780 node_ready.go:35] waiting up to 6m0s for node "no-preload-122332" to be "Ready" ...
	I1229 07:16:10.079533  260780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:10.080154  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:16:10.080176  260780 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:16:10.088176  260780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:10.094590  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:16:10.094611  260780 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:16:10.109751  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:16:10.109777  260780 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:16:10.123436  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:16:10.123461  260780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:16:10.138346  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:16:10.138372  260780 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:16:10.152445  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:16:10.152468  260780 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:16:10.165143  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:16:10.165160  260780 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:16:10.178658  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:16:10.178684  260780 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:16:10.193528  260780 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:10.193557  260780 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:16:10.205978  260780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:11.530362  260780 node_ready.go:49] node "no-preload-122332" is "Ready"
	I1229 07:16:11.530405  260780 node_ready.go:38] duration metric: took 1.468113831s for node "no-preload-122332" to be "Ready" ...
	I1229 07:16:11.530423  260780 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:11.530481  260780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:12.096131  260780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.016563566s)
	I1229 07:16:12.096237  260780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.008009241s)
	I1229 07:16:12.096429  260780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.890415578s)
	I1229 07:16:12.096486  260780 api_server.go:72] duration metric: took 2.204662826s to wait for apiserver process to appear ...
	I1229 07:16:12.096503  260780 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:12.096522  260780 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:16:12.097856  260780 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-122332 addons enable metrics-server
	
	I1229 07:16:12.102178  260780 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:12.102206  260780 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:16:12.104099  260780 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1229 07:16:12.105156  260780 addons.go:530] duration metric: took 2.213279034s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
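
The healthz probes above are expected to return 500 while post-start hooks such as rbac/bootstrap-roles are still settling; the driver simply keeps polling until the endpoint answers 200 ok, which it does about a second later at 07:16:13.101. A minimal sketch of that poll loop follows; waitForHealthz is a hypothetical helper and skips TLS verification for brevity, whereas minikube trusts the cluster's CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns
    // HTTP 200, roughly the way the log does at half-second intervals.
    // A 500 response listing failed poststarthooks is treated as retryable.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute))
    }
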
	W1229 07:16:08.999265  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	W1229 07:16:11.000314  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	I1229 07:16:11.243139  257698 addons.go:530] duration metric: took 535.674583ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:16:11.539081  257698 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-798607" context rescaled to 1 replicas
	W1229 07:16:13.040968  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	I1229 07:16:12.596646  260780 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:16:12.607232  260780 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:12.607265  260780 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:16:13.096738  260780 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:16:13.101713  260780 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1229 07:16:13.103040  260780 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:13.103069  260780 api_server.go:131] duration metric: took 1.006559392s to wait for apiserver health ...
	I1229 07:16:13.103077  260780 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:13.107159  260780 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:13.107205  260780 system_pods.go:61] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:13.107236  260780 system_pods.go:61] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:13.107251  260780 system_pods.go:61] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:16:13.107265  260780 system_pods.go:61] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:13.107278  260780 system_pods.go:61] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:13.107290  260780 system_pods.go:61] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:16:13.107311  260780 system_pods.go:61] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:13.107324  260780 system_pods.go:61] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:13.107332  260780 system_pods.go:74] duration metric: took 4.248721ms to wait for pod list to return data ...
	I1229 07:16:13.107643  260780 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:13.111795  260780 default_sa.go:45] found service account: "default"
	I1229 07:16:13.111819  260780 default_sa.go:55] duration metric: took 4.157923ms for default service account to be created ...
	I1229 07:16:13.111830  260780 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:13.114940  260780 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:13.114974  260780 system_pods.go:89] "coredns-7d764666f9-6rcr2" [51ba32ec-f0c4-4dbd-b555-a3a3f8f02319] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:13.114985  260780 system_pods.go:89] "etcd-no-preload-122332" [5a8423b5-2e58-4a29-86c5-e8ea350f48c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:13.115001  260780 system_pods.go:89] "kindnet-rq99f" [bb2b7600-b85c-4a5b-aa87-b495394b1749] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:16:13.115019  260780 system_pods.go:89] "kube-apiserver-no-preload-122332" [1186072e-56b1-4fd6-b028-b99efba982c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:13.115031  260780 system_pods.go:89] "kube-controller-manager-no-preload-122332" [ac595152-44f9-4812-843b-29329fd7c659] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:13.115040  260780 system_pods.go:89] "kube-proxy-qvww2" [01123e19-62cc-4666-8d46-8e51a274f6c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:16:13.115049  260780 system_pods.go:89] "kube-scheduler-no-preload-122332" [69d66c3a-fc72-44e8-8d5a-3a4914e8705b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:13.115059  260780 system_pods.go:89] "storage-provisioner" [37396a97-f1db-4026-af7d-551f0fec188f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:13.115068  260780 system_pods.go:126] duration metric: took 3.231622ms to wait for k8s-apps to be running ...
	I1229 07:16:13.115080  260780 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:13.115134  260780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:13.129420  260780 system_svc.go:56] duration metric: took 14.330066ms WaitForService to wait for kubelet
	I1229 07:16:13.129450  260780 kubeadm.go:587] duration metric: took 3.23762937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:13.129471  260780 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:13.132922  260780 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:13.132958  260780 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:13.132986  260780 node_conditions.go:105] duration metric: took 3.508619ms to run NodePressure ...
	I1229 07:16:13.133002  260780 start.go:242] waiting for startup goroutines ...
	I1229 07:16:13.133013  260780 start.go:247] waiting for cluster config update ...
	I1229 07:16:13.133027  260780 start.go:256] writing updated cluster config ...
	I1229 07:16:13.133395  260780 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:13.137637  260780 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:13.141632  260780 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6rcr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:16:15.146677  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:17.147111  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
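
After the component checks, the driver spends up to 4m0s waiting for every labelled kube-system pod to report the Ready condition, retrying every couple of seconds as the pod_ready lines just above show. An equivalent wait expressed with kubectl is sketched below; waitPodsReady is a hypothetical wrapper, and minikube polls the API directly rather than shelling out.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitPodsReady blocks until every pod matching the label selector is
    // Ready, or the timeout expires, using `kubectl wait`. Rough stand-in for
    // the pod_ready.go loop in the log.
    func waitPodsReady(kubeconfig, namespace, selector, timeout string) error {
    	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    		"-n", namespace, "wait", "--for=condition=Ready",
    		"pod", "-l", selector, "--timeout", timeout)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	err := waitPodsReady("/home/jenkins/minikube-integration/22353-9207/kubeconfig",
    		"kube-system", "k8s-app=kube-dns", "4m")
    	fmt.Println(err)
    }
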
	W1229 07:16:13.499098  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	W1229 07:16:15.999532  252990 node_ready.go:57] node "embed-certs-739827" has "Ready":"False" status (will retry)
	I1229 07:16:16.502072  252990 node_ready.go:49] node "embed-certs-739827" is "Ready"
	I1229 07:16:16.502105  252990 node_ready.go:38] duration metric: took 12.006247326s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:16.502128  252990 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:16.502196  252990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:16.522089  252990 api_server.go:72] duration metric: took 12.319199575s to wait for apiserver process to appear ...
	I1229 07:16:16.522121  252990 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:16.522169  252990 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:16.529618  252990 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:16:16.530753  252990 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:16.530774  252990 api_server.go:131] duration metric: took 8.646632ms to wait for apiserver health ...
	I1229 07:16:16.530782  252990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:16.534314  252990 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:16.534355  252990 system_pods.go:61] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:16.534363  252990 system_pods.go:61] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:16.534375  252990 system_pods.go:61] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:16.534381  252990 system_pods.go:61] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:16.534393  252990 system_pods.go:61] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:16.534408  252990 system_pods.go:61] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:16.534414  252990 system_pods.go:61] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:16.534421  252990 system_pods.go:61] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:16.534428  252990 system_pods.go:74] duration metric: took 3.64069ms to wait for pod list to return data ...
	I1229 07:16:16.534437  252990 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:16.536899  252990 default_sa.go:45] found service account: "default"
	I1229 07:16:16.536918  252990 default_sa.go:55] duration metric: took 2.474071ms for default service account to be created ...
	I1229 07:16:16.536928  252990 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:16.540806  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:16.540861  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:16.540873  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:16.540881  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:16.540890  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:16.540899  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:16.540905  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:16.540920  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:16.540934  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:16.540969  252990 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:16:16.809060  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:16.809101  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:16.809110  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:16.809120  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:16.809127  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:16.809137  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:16.809146  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:16.809154  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:16.809164  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:17.193827  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:17.193866  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:17.193876  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:17.193884  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:17.193889  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:17.193919  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:17.193931  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:17.193939  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:17.193946  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:17.538060  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:17.538103  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:17.538112  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:17.538119  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:17.538125  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:17.538133  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:17.538139  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:17.538145  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:17.538153  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:16.286301  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:16.286785  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:16.286845  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:16.286914  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:16.322121  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:16.322148  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:16.322153  225445 cri.go:96] found id: ""
	I1229 07:16:16.322163  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:16.322251  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.327395  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.332794  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:16.332858  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:16.369417  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:16.369445  225445 cri.go:96] found id: ""
	I1229 07:16:16.369457  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:16.369520  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.374332  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:16.374395  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:16.408673  225445 cri.go:96] found id: ""
	I1229 07:16:16.408703  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.408715  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:16.408722  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:16.408777  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:16.440753  225445 cri.go:96] found id: "bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	I1229 07:16:16.440777  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:16.440782  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:16.440786  225445 cri.go:96] found id: ""
	I1229 07:16:16.440794  225445 logs.go:282] 3 containers: [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:16.440857  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.445989  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.450432  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.454706  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:16.454763  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:16.489203  225445 cri.go:96] found id: ""
	I1229 07:16:16.489239  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.489250  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:16.489257  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:16.489318  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:16:16.528556  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:16.528577  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:16.528583  225445 cri.go:96] found id: ""
	I1229 07:16:16.528592  225445 logs.go:282] 2 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:16:16.528645  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.534131  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:16.538795  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:16:16.538858  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:16:16.573286  225445 cri.go:96] found id: ""
	I1229 07:16:16.573315  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.573325  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:16.573333  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:16.573394  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:16.610365  225445 cri.go:96] found id: ""
	I1229 07:16:16.610393  225445 logs.go:282] 0 containers: []
	W1229 07:16:16.610406  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:16.610419  225445 logs.go:123] Gathering logs for kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] ...
	I1229 07:16:16.610437  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078"
	W1229 07:16:16.642063  225445 logs.go:138] Found kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078] problem: E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:16.642090  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:16.642104  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:16.738108  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:16.738204  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:16.807730  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:16.807758  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:16.807774  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:16.848558  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:16.848589  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:16.940984  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:16.941019  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:16.975456  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:16.975490  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:17.010586  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:17.010619  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:17.046987  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:17.047022  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:17.088590  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:17.088624  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:17.218079  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:17.218119  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:17.236043  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:17.236078  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:17.275451  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:17.275485  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:17.318709  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:17.318742  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:17.318809  225445 out.go:285] X Problems detected in kube-scheduler [bcf9a8233b7fe61acd5d7b0071cecd58e00a26cec4247f4ead31d75316b79078]:
	W1229 07:16:17.318824  225445 out.go:285]   E1229 07:14:51.301899       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:17.318830  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:17.318836  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
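
The block above shows the log-gathering pass: for every control-plane container it found, minikube runs "sudo /usr/local/bin/crictl logs --tail 400 <container-id>" (plus journalctl for kubelet/CRI-O and a filtered dmesg) through its SSH runner. A minimal Go sketch of the per-container step, run directly on the node rather than over SSH, and with a container ID that is only a placeholder:

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs returns the last `tail` lines of one container's log,
// mirroring the "Gathering logs for ..." steps in the report above.
// Running crictl locally (and via sudo) is an assumption of this sketch.
func gatherContainerLogs(containerID string, tail int) (string, error) {
	cmd := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), containerID)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// Placeholder ID; in the log the IDs come from an earlier `crictl ps` pass.
	logs, err := gatherContainerLogs("8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Println(logs)
}
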
	W1229 07:16:15.041173  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	W1229 07:16:17.041830  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	W1229 07:16:19.042177  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	I1229 07:16:18.120694  252990 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:18.120733  252990 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Running
	I1229 07:16:18.120744  252990 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running
	I1229 07:16:18.120750  252990 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:18.120757  252990 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running
	I1229 07:16:18.120769  252990 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:18.120784  252990 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:18.120792  252990 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running
	I1229 07:16:18.120799  252990 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Running
	I1229 07:16:18.120809  252990 system_pods.go:126] duration metric: took 1.583874938s to wait for k8s-apps to be running ...
	I1229 07:16:18.120820  252990 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:18.120875  252990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:18.138516  252990 system_svc.go:56] duration metric: took 17.687868ms WaitForService to wait for kubelet
	I1229 07:16:18.138549  252990 kubeadm.go:587] duration metric: took 13.935664043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:18.138571  252990 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:18.141761  252990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:18.141791  252990 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:18.141814  252990 node_conditions.go:105] duration metric: took 3.23376ms to run NodePressure ...
	I1229 07:16:18.141829  252990 start.go:242] waiting for startup goroutines ...
	I1229 07:16:18.141843  252990 start.go:247] waiting for cluster config update ...
	I1229 07:16:18.141856  252990 start.go:256] writing updated cluster config ...
	I1229 07:16:18.142150  252990 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:18.147256  252990 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:18.151738  252990 pod_ready.go:83] waiting for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.156521  252990 pod_ready.go:94] pod "coredns-7d764666f9-55529" is "Ready"
	I1229 07:16:18.156544  252990 pod_ready.go:86] duration metric: took 4.780643ms for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.158730  252990 pod_ready.go:83] waiting for pod "etcd-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.163175  252990 pod_ready.go:94] pod "etcd-embed-certs-739827" is "Ready"
	I1229 07:16:18.163204  252990 pod_ready.go:86] duration metric: took 4.452227ms for pod "etcd-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.165573  252990 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.169958  252990 pod_ready.go:94] pod "kube-apiserver-embed-certs-739827" is "Ready"
	I1229 07:16:18.169980  252990 pod_ready.go:86] duration metric: took 4.385251ms for pod "kube-apiserver-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.172262  252990 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:18.952863  252990 pod_ready.go:94] pod "kube-controller-manager-embed-certs-739827" is "Ready"
	I1229 07:16:18.952902  252990 pod_ready.go:86] duration metric: took 780.618636ms for pod "kube-controller-manager-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:19.153103  252990 pod_ready.go:83] waiting for pod "kube-proxy-hdmp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:19.552450  252990 pod_ready.go:94] pod "kube-proxy-hdmp6" is "Ready"
	I1229 07:16:19.552474  252990 pod_ready.go:86] duration metric: took 399.346089ms for pod "kube-proxy-hdmp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:19.752024  252990 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:20.151703  252990 pod_ready.go:94] pod "kube-scheduler-embed-certs-739827" is "Ready"
	I1229 07:16:20.151737  252990 pod_ready.go:86] duration metric: took 399.681992ms for pod "kube-scheduler-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:20.151753  252990 pod_ready.go:40] duration metric: took 2.004461757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:20.197550  252990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:16:20.228678  252990 out.go:179] * Done! kubectl is now configured to use "embed-certs-739827" cluster and "default" namespace by default
	W1229 07:16:19.147566  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:21.147802  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:21.541192  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	W1229 07:16:24.041265  257698 node_ready.go:57] node "default-k8s-diff-port-798607" has "Ready":"False" status (will retry)
	I1229 07:16:24.541697  257698 node_ready.go:49] node "default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:24.541740  257698 node_ready.go:38] duration metric: took 13.504008187s for node "default-k8s-diff-port-798607" to be "Ready" ...
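
The node_ready.go lines retry every couple of seconds until the node object reports "Ready":"True" (about 13.5s here). A sketch of that check with client-go; the kubeconfig path is a placeholder and the node name is taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True, in the
// spirit of the node_ready.go retries above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log above retries on roughly this cadence
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "default-k8s-diff-port-798607", 4*time.Minute))
}
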
	I1229 07:16:24.541757  257698 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:24.541817  257698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:24.555349  257698 api_server.go:72] duration metric: took 13.847927079s to wait for apiserver process to appear ...
	I1229 07:16:24.555380  257698 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:24.555397  257698 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1229 07:16:24.560461  257698 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1229 07:16:24.561323  257698 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:24.561348  257698 api_server.go:131] duration metric: took 5.961012ms to wait for apiserver health ...
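
The api_server.go lines first wait for the kube-apiserver process (the pgrep call) and then poll https://192.168.85.2:8444/healthz until it returns 200. A minimal sketch of the HTTP half; it skips certificate verification, which is an assumption a quick out-of-cluster probe against the apiserver's self-signed cert would typically make instead of loading the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// healthz returns nil once the endpoint answers 200 OK.
func healthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only; not for production use
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s never returned 200", url)
}

func main() {
	// URL taken from the log above (the "diff-port" profile uses 8444).
	fmt.Println(healthz("https://192.168.85.2:8444/healthz", 2*time.Minute))
}
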
	I1229 07:16:24.561358  257698 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:24.564806  257698 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:24.564850  257698 system_pods.go:61] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:24.564865  257698 system_pods.go:61] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:24.564876  257698 system_pods.go:61] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:24.564884  257698 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:24.564891  257698 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:24.564900  257698 system_pods.go:61] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:24.564906  257698 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:24.564923  257698 system_pods.go:61] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:24.564935  257698 system_pods.go:74] duration metric: took 3.569715ms to wait for pod list to return data ...
	I1229 07:16:24.564947  257698 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:24.568054  257698 default_sa.go:45] found service account: "default"
	I1229 07:16:24.568071  257698 default_sa.go:55] duration metric: took 3.116968ms for default service account to be created ...
	I1229 07:16:24.568078  257698 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:24.571042  257698 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:24.571074  257698 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:24.571082  257698 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:24.571091  257698 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:24.571101  257698 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:24.571111  257698 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:24.571117  257698 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:24.571123  257698 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:24.571132  257698 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:24.571171  257698 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1229 07:16:24.766859  257698 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:24.766915  257698 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:24.766924  257698 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:24.766933  257698 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:24.766941  257698 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:24.766951  257698 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:24.766957  257698 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:24.766962  257698 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:24.766973  257698 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:16:25.156403  257698 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:25.156439  257698 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Running
	I1229 07:16:25.156448  257698 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running
	I1229 07:16:25.156455  257698 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running
	I1229 07:16:25.156460  257698 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running
	I1229 07:16:25.156466  257698 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running
	I1229 07:16:25.156472  257698 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running
	I1229 07:16:25.156478  257698 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running
	I1229 07:16:25.156483  257698 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Running
	I1229 07:16:25.156493  257698 system_pods.go:126] duration metric: took 588.408771ms to wait for k8s-apps to be running ...
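
The system_pods.go lines list the kube-system pods and retry (200ms here) until no required component, such as kube-dns, is missing a Running pod. A rough client-go sketch of that loop; the set of required label selectors and the kubeconfig path are assumptions modelled on the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents returns the required selectors that have no Running pod yet.
func missingComponents(cs *kubernetes.Clientset, selectors []string) ([]string, error) {
	var missing []string
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return nil, err
		}
		running := false
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running = true
				break
			}
		}
		if !running {
			missing = append(missing, sel)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	required := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} // assumed set
	for {
		missing, err := missingComponents(cs, required)
		if err == nil && len(missing) == 0 {
			fmt.Println("all required kube-system components are running")
			return
		}
		fmt.Println("will retry, missing:", missing, "err:", err)
		time.Sleep(200 * time.Millisecond)
	}
}
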
	I1229 07:16:25.156506  257698 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:25.156558  257698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:25.170169  257698 system_svc.go:56] duration metric: took 13.655258ms WaitForService to wait for kubelet
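
The kubelet service check is the single command "sudo systemctl is-active --quiet service kubelet": a zero exit status means the unit is active, and the WaitForService duration metric is just the wall clock around that call. A tiny sketch of the same idea (unit name passed directly, which is a simplification of the command quoted above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Exit status 0 means the unit is active; any other status or exec error means it is not.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Printf("kubelet active: %v (checked in %s)\n", err == nil, time.Since(start))
}
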
	I1229 07:16:25.170197  257698 kubeadm.go:587] duration metric: took 14.462778971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:25.170213  257698 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:25.174799  257698 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:25.174825  257698 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:25.174841  257698 node_conditions.go:105] duration metric: took 4.624004ms to run NodePressure ...
	I1229 07:16:25.174852  257698 start.go:242] waiting for startup goroutines ...
	I1229 07:16:25.174858  257698 start.go:247] waiting for cluster config update ...
	I1229 07:16:25.174868  257698 start.go:256] writing updated cluster config ...
	I1229 07:16:25.175140  257698 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:25.178840  257698 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:25.256252  257698 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jwmww" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.260813  257698 pod_ready.go:94] pod "coredns-7d764666f9-jwmww" is "Ready"
	I1229 07:16:25.260841  257698 pod_ready.go:86] duration metric: took 4.558151ms for pod "coredns-7d764666f9-jwmww" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.262830  257698 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.266727  257698 pod_ready.go:94] pod "etcd-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:25.266752  257698 pod_ready.go:86] duration metric: took 3.893811ms for pod "etcd-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.268446  257698 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.271916  257698 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:25.271945  257698 pod_ready.go:86] duration metric: took 3.478604ms for pod "kube-apiserver-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.273743  257698 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.583106  257698 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:25.583133  257698 pod_ready.go:86] duration metric: took 309.370002ms for pod "kube-controller-manager-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:25.783207  257698 pod_ready.go:83] waiting for pod "kube-proxy-4mnzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.183509  257698 pod_ready.go:94] pod "kube-proxy-4mnzc" is "Ready"
	I1229 07:16:26.183537  257698 pod_ready.go:86] duration metric: took 400.277554ms for pod "kube-proxy-4mnzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.383639  257698 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.782731  257698 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-798607" is "Ready"
	I1229 07:16:26.782755  257698 pod_ready.go:86] duration metric: took 399.089831ms for pod "kube-scheduler-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:16:26.782766  257698 pod_ready.go:40] duration metric: took 1.603900332s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:26.826665  257698 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:16:26.828685  257698 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-798607" cluster and "default" namespace by default
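
Before printing "Done!", pod_ready.go does an extra wait: every kube-system pod carrying one of the listed labels (k8s-app=kube-dns, component=etcd, component=kube-apiserver, ...) must report the Ready condition, or no longer exist. A sketch of one such per-pod check; the kubeconfig path is a placeholder and the pod name is taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone waits until the named pod has condition Ready=True or has
// been deleted, mirroring the "to be Ready or be gone" wording in the log.
func podReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return nil // gone counts as success
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podReadyOrGone(cs, "kube-system", "coredns-7d764666f9-jwmww", 4*time.Minute))
}
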
	W1229 07:16:23.647149  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:26.147485  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	I1229 07:16:27.320716  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:27.321052  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:27.321103  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:27.321144  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:27.350441  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:27.350465  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:27.350471  225445 cri.go:96] found id: ""
	I1229 07:16:27.350480  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:27.350537  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.354565  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.358053  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:27.358107  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:27.383946  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:27.383967  225445 cri.go:96] found id: ""
	I1229 07:16:27.383977  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:27.384027  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.387929  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:27.387982  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:27.415193  225445 cri.go:96] found id: ""
	I1229 07:16:27.415214  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.415236  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:27.415244  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:27.415300  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:27.442113  225445 cri.go:96] found id: "14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	I1229 07:16:27.442133  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:27.442152  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:27.442156  225445 cri.go:96] found id: ""
	I1229 07:16:27.442163  225445 logs.go:282] 3 containers: [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:27.442245  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.446341  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.449897  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.453338  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:27.453396  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:27.480706  225445 cri.go:96] found id: ""
	I1229 07:16:27.480735  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.480746  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:27.480754  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:27.480811  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:16:27.508753  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:27.508778  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:27.508783  225445 cri.go:96] found id: ""
	I1229 07:16:27.508789  225445 logs.go:282] 2 containers: [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:16:27.508833  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.513001  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:27.517076  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:16:27.517136  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:16:27.543841  225445 cri.go:96] found id: ""
	I1229 07:16:27.543869  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.543881  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:27.543911  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:27.543965  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:27.571613  225445 cri.go:96] found id: ""
	I1229 07:16:27.571640  225445 logs.go:282] 0 containers: []
	W1229 07:16:27.571650  225445 logs.go:284] No container was found matching "storage-provisioner"
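
Before any logs are gathered, the cri.go lines enumerate containers per component with "sudo crictl ps -a --quiet --name=<component>"; an empty result is what produces the "No container was found matching ..." warnings above. A sketch of that enumeration, run locally on the node (minikube drives the same command over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (any state) whose name matches
// the component, using crictl's --quiet output of one ID per line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "storage-provisioner"} {
		ids, err := containerIDs(c)
		switch {
		case err != nil:
			fmt.Println(c, "error:", err)
		case len(ids) == 0:
			fmt.Printf("No container was found matching %q\n", c)
		default:
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
}
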
	I1229 07:16:27.571662  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:27.571679  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:27.598341  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:27.598373  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:27.625760  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:27.625787  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:27.695839  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:27.695882  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:27.752835  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:27.752855  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:27.752867  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:27.784333  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:27.784371  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:27.814527  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:27.814559  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:27.841004  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:27.841034  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:27.871420  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:27.871446  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:27.961848  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:27.961880  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:27.975778  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:27.975810  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:28.009006  225445 logs.go:123] Gathering logs for kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] ...
	I1229 07:16:28.009032  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	W1229 07:16:28.034661  225445 logs.go:138] Found kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] problem: E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:28.034684  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:28.034695  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:28.103089  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:28.103115  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:28.103169  225445 out.go:285] X Problems detected in kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90]:
	W1229 07:16:28.103181  225445 out.go:285]   E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:28.103188  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:28.103194  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
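
The "X Problems detected in kube-scheduler" warnings come from scanning the gathered container logs for known fatal messages; here the extra scheduler container keeps dying because port 10259 is already bound by the running one. A sketch of that kind of substring scan, where the pattern list is an assumption and the sample line is copied from the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// knownProblems returns the log lines matching any of the given substrings,
// the way the report flags "bind: address already in use" above.
func knownProblems(logText string, patterns []string) []string {
	var hits []string
	sc := bufio.NewScanner(strings.NewReader(logText))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				hits = append(hits, line)
				break
			}
		}
	}
	return hits
}

func main() {
	sample := `E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"`
	patterns := []string{"bind: address already in use", "connection refused", "Unable to connect to the server"}
	for _, hit := range knownProblems(sample, patterns) {
		fmt.Println("X Problem detected:", hit)
	}
}
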
	W1229 07:16:28.647324  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	W1229 07:16:30.647774  260780 pod_ready.go:104] pod "coredns-7d764666f9-6rcr2" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 29 07:16:24 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:24.505125428Z" level=info msg="Starting container: 3af6257fde5be073efa1d4458f554fb8c682e980a557d55a787e8ffa3bf1415a" id=517d2ada-807c-4465-a39d-61baf3d202e0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:24 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:24.50757472Z" level=info msg="Started container" PID=1891 containerID=3af6257fde5be073efa1d4458f554fb8c682e980a557d55a787e8ffa3bf1415a description=kube-system/coredns-7d764666f9-jwmww/coredns id=517d2ada-807c-4465-a39d-61baf3d202e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=612fb27e875be67e58744ab965516a50d63844b36a03056bcb04ce1948cbb65d
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.289034473Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f1f0c674-9845-4b82-a839-863e2baa4f57 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.289096876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.294090258Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:33de0da942f4124e05de9c41bf7fb300f6ab811e2eb2e7148c2a6ed1ed3ebb94 UID:3c0d7004-d857-4d4d-847d-4122d3514fc2 NetNS:/var/run/netns/10e74c54-7bb0-437a-89a4-1ccb8aaff470 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a2e50}] Aliases:map[]}"
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.294131118Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.30993152Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:33de0da942f4124e05de9c41bf7fb300f6ab811e2eb2e7148c2a6ed1ed3ebb94 UID:3c0d7004-d857-4d4d-847d-4122d3514fc2 NetNS:/var/run/netns/10e74c54-7bb0-437a-89a4-1ccb8aaff470 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a2e50}] Aliases:map[]}"
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.310052956Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.310843635Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.31167219Z" level=info msg="Ran pod sandbox 33de0da942f4124e05de9c41bf7fb300f6ab811e2eb2e7148c2a6ed1ed3ebb94 with infra container: default/busybox/POD" id=f1f0c674-9845-4b82-a839-863e2baa4f57 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.312936575Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=39288678-ea0e-4e1d-8be6-dda6ea5d6b76 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.313071345Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=39288678-ea0e-4e1d-8be6-dda6ea5d6b76 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.313159598Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=39288678-ea0e-4e1d-8be6-dda6ea5d6b76 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.313927293Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=745d51d8-8989-4dd4-bd72-64295878b6c4 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:16:27 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:27.31426482Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.44599166Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=745d51d8-8989-4dd4-bd72-64295878b6c4 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.446618227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99f12d67-b279-4b29-8a38-32bf6c51870c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.448332787Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8546b39-918d-410f-867c-ee4bb6efb571 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.451287657Z" level=info msg="Creating container: default/busybox/busybox" id=de0e490c-e9e7-4558-bb7d-25c1c9a0b4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.451413391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.454970282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.455386201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.485956438Z" level=info msg="Created container f5784134a0f8f204724dc3587577aaf3e3b059f1f61ce7e4ca3f1b2a1a615ff9: default/busybox/busybox" id=de0e490c-e9e7-4558-bb7d-25c1c9a0b4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.486574189Z" level=info msg="Starting container: f5784134a0f8f204724dc3587577aaf3e3b059f1f61ce7e4ca3f1b2a1a615ff9" id=9aedc77d-7668-4b5b-9a3e-62147a74ad89 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:28 default-k8s-diff-port-798607 crio[772]: time="2025-12-29T07:16:28.488424449Z" level=info msg="Started container" PID=1972 containerID=f5784134a0f8f204724dc3587577aaf3e3b059f1f61ce7e4ca3f1b2a1a615ff9 description=default/busybox/busybox id=9aedc77d-7668-4b5b-9a3e-62147a74ad89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=33de0da942f4124e05de9c41bf7fb300f6ab811e2eb2e7148c2a6ed1ed3ebb94
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f5784134a0f8f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   33de0da942f41       busybox                                                default
	3af6257fde5be       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   612fb27e875be       coredns-7d764666f9-jwmww                               kube-system
	80ed3d49c0632       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   192ea4944224b       storage-provisioner                                    kube-system
	7a43e67daa3a3       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   124b180178b68       kindnet-m6jd2                                          kube-system
	f47bd6d1daa96       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   e739ff253fed1       kube-proxy-4mnzc                                       kube-system
	91d589fb2fbee       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   d9e2f47bc0cbc       kube-apiserver-default-k8s-diff-port-798607            kube-system
	be44462dc8ac3       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   f038e5403acdd       kube-controller-manager-default-k8s-diff-port-798607   kube-system
	439d3f124a8e9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   492fbe90648e4       etcd-default-k8s-diff-port-798607                      kube-system
	19454af1a475d       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   f9033b9a8c749       kube-scheduler-default-k8s-diff-port-798607            kube-system
	
	
	==> coredns [3af6257fde5be073efa1d4458f554fb8c682e980a557d55a787e8ffa3bf1415a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43676 - 62663 "HINFO IN 1542253745206119436.6319108234290484347. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03667673s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-798607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-798607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=default-k8s-diff-port-798607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_16_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-798607
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:16:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:16:35 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:16:35 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:16:35 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:16:35 +0000   Mon, 29 Dec 2025 07:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-798607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                b24ee258-37aa-4e3b-b0b9-8a7f17d3bb24
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-jwmww                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-798607                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-m6jd2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-798607             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-798607    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-4mnzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-798607             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node default-k8s-diff-port-798607 event: Registered Node default-k8s-diff-port-798607 in Controller
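
The describe-nodes section above reports the node's Capacity and Allocatable resources (8 CPUs, ~32 GiB memory, ~304 GiB ephemeral storage); the earlier NodePressure verification reads the same fields. A short client-go sketch of pulling those values, with a placeholder kubeconfig path and the node name taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-798607", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is what the node has; Allocatable is what remains for pods.
	fmt.Println("cpu capacity:      ", node.Status.Capacity.Cpu().String())
	fmt.Println("memory capacity:   ", node.Status.Capacity.Memory().String())
	fmt.Println("ephemeral (alloc): ", node.Status.Allocatable.StorageEphemeral().String())
}
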
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [439d3f124a8e9804c03b8aed957a2d2710f12030202c3e7a632b29a8c591fd62] <==
	{"level":"info","ts":"2025-12-29T07:16:01.238704Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:16:01.730110Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:16:01.730163Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:16:01.730269Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-29T07:16:01.730300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:01.730322Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:01.731014Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:01.731041Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:01.731061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:01.731071Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:01.731925Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-798607 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:16:01.731932Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:01.731953Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:01.732017Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:16:01.732212Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:01.732273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:01.732527Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:16:01.732622Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:16:01.732678Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:16:01.732746Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:16:01.732958Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:16:01.733449Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:01.733687Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:01.737067Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:16:01.737121Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 07:16:36 up 59 min,  0 user,  load average: 2.94, 2.76, 2.01
	Linux default-k8s-diff-port-798607 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7a43e67daa3a3d5d35ada3e446ca04f57c4f76017c8292e42f2a0baeac862125] <==
	I1229 07:16:13.294974       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:16:13.295252       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:16:13.295383       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:16:13.295403       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:16:13.295424       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:16:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:16:13.567292       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:16:13.567341       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:16:13.567355       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:16:13.590616       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:16:13.991545       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:16:13.991573       1 metrics.go:72] Registering metrics
	I1229 07:16:13.991646       1 controller.go:711] "Syncing nftables rules"
	I1229 07:16:23.568828       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:16:23.568905       1 main.go:301] handling current node
	I1229 07:16:33.570658       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:16:33.570697       1 main.go:301] handling current node
	
	
	==> kube-apiserver [91d589fb2fbee4f033286ae449932cfa5b3fb46d2a6462294090121169f9c2a2] <==
	I1229 07:16:02.791926       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:16:02.802690       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:16:02.805322       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:16:02.805399       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1229 07:16:02.810809       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:02.811016       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:16:02.815155       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:03.695661       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:16:03.699490       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:16:03.699513       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:16:04.159529       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:16:04.205994       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:16:04.299159       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:16:04.307368       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1229 07:16:04.308732       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:16:04.315553       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:16:04.730262       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:16:05.196750       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:16:05.207078       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:16:05.215927       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:16:10.281764       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:10.286333       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:10.430118       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:16:10.736874       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1229 07:16:35.072503       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:59312: use of closed network connection
	
	
	==> kube-controller-manager [be44462dc8ac3a37f706dc7a89388d3ee6a5e71fe0707633e3fe1f9dd1500a6c] <==
	I1229 07:16:09.536813       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.536861       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.536866       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.536879       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.536983       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:16:09.537350       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.537430       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.536989       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.537478       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.536250       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.537387       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-798607"
	I1229 07:16:09.537686       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:16:09.537007       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.538076       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.538464       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.539693       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:09.537016       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.537243       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.541313       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.553452       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-798607" podCIDRs=["10.244.0.0/24"]
	I1229 07:16:09.636695       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:09.636720       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:16:09.636726       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:16:09.640018       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:24.540754       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [f47bd6d1daa9635d4c7f0108b466976860e2b530a9adb5920d7f3e53d8f6e179] <==
	I1229 07:16:12.113313       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:16:12.190004       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:12.290901       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:12.290945       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:16:12.291038       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:16:12.317805       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:16:12.317873       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:16:12.324281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:16:12.324805       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:16:12.324869       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:12.326418       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:16:12.326499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:16:12.326447       1 config.go:200] "Starting service config controller"
	I1229 07:16:12.326589       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:16:12.326469       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:16:12.326602       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:16:12.326685       1 config.go:309] "Starting node config controller"
	I1229 07:16:12.326718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:16:12.427311       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:16:12.427330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:16:12.427361       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:16:12.427421       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [19454af1a475ddf56ea2cdb208d8873983279d51fe3b49fdb8846c4728e6136d] <==
	E1229 07:16:02.783691       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:16:02.783725       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:16:02.783804       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:16:02.783818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:16:02.783854       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:16:02.783930       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:16:02.783994       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:16:02.784067       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:16:02.784756       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:16:02.784851       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:16:02.784850       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:16:03.635534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:16:03.635534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:16:03.691936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:16:03.699140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:16:03.724207       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:16:03.813564       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:16:03.813863       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:16:03.829499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:16:03.836870       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:16:03.842002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:16:03.894724       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:16:03.932112       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:16:03.975627       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1229 07:16:04.465854       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:16:10 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:10.876569    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6t9z\" (UniqueName: \"kubernetes.io/projected/c322649a-8539-4264-9165-2a2522f06078-kube-api-access-n6t9z\") pod \"kube-proxy-4mnzc\" (UID: \"c322649a-8539-4264-9165-2a2522f06078\") " pod="kube-system/kube-proxy-4mnzc"
	Dec 29 07:16:10 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:10.876599    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eae39509-802b-4a6e-b436-904c44761153-cni-cfg\") pod \"kindnet-m6jd2\" (UID: \"eae39509-802b-4a6e-b436-904c44761153\") " pod="kube-system/kindnet-m6jd2"
	Dec 29 07:16:10 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:10.876634    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eae39509-802b-4a6e-b436-904c44761153-xtables-lock\") pod \"kindnet-m6jd2\" (UID: \"eae39509-802b-4a6e-b436-904c44761153\") " pod="kube-system/kindnet-m6jd2"
	Dec 29 07:16:10 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:10.876685    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c322649a-8539-4264-9165-2a2522f06078-lib-modules\") pod \"kube-proxy-4mnzc\" (UID: \"c322649a-8539-4264-9165-2a2522f06078\") " pod="kube-system/kube-proxy-4mnzc"
	Dec 29 07:16:14 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:14.102353    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-4mnzc" podStartSLOduration=4.102331638 podStartE2EDuration="4.102331638s" podCreationTimestamp="2025-12-29 07:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:16:12.099628299 +0000 UTC m=+7.144348307" watchObservedRunningTime="2025-12-29 07:16:14.102331638 +0000 UTC m=+9.147051644"
	Dec 29 07:16:14 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:14.807383    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-798607" containerName="kube-apiserver"
	Dec 29 07:16:14 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:14.819089    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-m6jd2" podStartSLOduration=3.446820534 podStartE2EDuration="4.819068743s" podCreationTimestamp="2025-12-29 07:16:10 +0000 UTC" firstStartedPulling="2025-12-29 07:16:11.701706003 +0000 UTC m=+6.746426002" lastFinishedPulling="2025-12-29 07:16:13.073954213 +0000 UTC m=+8.118674211" observedRunningTime="2025-12-29 07:16:14.102485175 +0000 UTC m=+9.147205182" watchObservedRunningTime="2025-12-29 07:16:14.819068743 +0000 UTC m=+9.863788750"
	Dec 29 07:16:15 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:15.755283    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-798607" containerName="etcd"
	Dec 29 07:16:16 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:16.094634    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-798607" containerName="etcd"
	Dec 29 07:16:16 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:16.495193    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-798607" containerName="kube-scheduler"
	Dec 29 07:16:17 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:17.096676    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-798607" containerName="kube-scheduler"
	Dec 29 07:16:17 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:17.269003    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-798607" containerName="kube-controller-manager"
	Dec 29 07:16:24 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:24.106873    1296 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:16:24 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:24.173211    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4tk\" (UniqueName: \"kubernetes.io/projected/77ec6576-1cba-401f-8b20-e6e97d7be45d-kube-api-access-4r4tk\") pod \"storage-provisioner\" (UID: \"77ec6576-1cba-401f-8b20-e6e97d7be45d\") " pod="kube-system/storage-provisioner"
	Dec 29 07:16:24 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:24.173323    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzbq\" (UniqueName: \"kubernetes.io/projected/1ab5b614-62d4-4118-9c4b-2e12e7ae7aec-kube-api-access-gwzbq\") pod \"coredns-7d764666f9-jwmww\" (UID: \"1ab5b614-62d4-4118-9c4b-2e12e7ae7aec\") " pod="kube-system/coredns-7d764666f9-jwmww"
	Dec 29 07:16:24 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:24.173384    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/77ec6576-1cba-401f-8b20-e6e97d7be45d-tmp\") pod \"storage-provisioner\" (UID: \"77ec6576-1cba-401f-8b20-e6e97d7be45d\") " pod="kube-system/storage-provisioner"
	Dec 29 07:16:24 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:24.173405    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ab5b614-62d4-4118-9c4b-2e12e7ae7aec-config-volume\") pod \"coredns-7d764666f9-jwmww\" (UID: \"1ab5b614-62d4-4118-9c4b-2e12e7ae7aec\") " pod="kube-system/coredns-7d764666f9-jwmww"
	Dec 29 07:16:24 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:24.812386    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-798607" containerName="kube-apiserver"
	Dec 29 07:16:25 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:25.114625    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwmww" containerName="coredns"
	Dec 29 07:16:25 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:25.125969    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jwmww" podStartSLOduration=15.125946843 podStartE2EDuration="15.125946843s" podCreationTimestamp="2025-12-29 07:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:16:25.125828691 +0000 UTC m=+20.170548699" watchObservedRunningTime="2025-12-29 07:16:25.125946843 +0000 UTC m=+20.170666849"
	Dec 29 07:16:25 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:25.134794    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.134774576 podStartE2EDuration="14.134774576s" podCreationTimestamp="2025-12-29 07:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:16:25.134513204 +0000 UTC m=+20.179233210" watchObservedRunningTime="2025-12-29 07:16:25.134774576 +0000 UTC m=+20.179494582"
	Dec 29 07:16:26 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:26.118007    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwmww" containerName="coredns"
	Dec 29 07:16:27 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:27.089388    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd9h5\" (UniqueName: \"kubernetes.io/projected/3c0d7004-d857-4d4d-847d-4122d3514fc2-kube-api-access-bd9h5\") pod \"busybox\" (UID: \"3c0d7004-d857-4d4d-847d-4122d3514fc2\") " pod="default/busybox"
	Dec 29 07:16:27 default-k8s-diff-port-798607 kubelet[1296]: E1229 07:16:27.120149    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwmww" containerName="coredns"
	Dec 29 07:16:29 default-k8s-diff-port-798607 kubelet[1296]: I1229 07:16:29.138510    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.004435874 podStartE2EDuration="3.138487222s" podCreationTimestamp="2025-12-29 07:16:26 +0000 UTC" firstStartedPulling="2025-12-29 07:16:27.313566343 +0000 UTC m=+22.358286341" lastFinishedPulling="2025-12-29 07:16:28.447617704 +0000 UTC m=+23.492337689" observedRunningTime="2025-12-29 07:16:29.138166679 +0000 UTC m=+24.182886685" watchObservedRunningTime="2025-12-29 07:16:29.138487222 +0000 UTC m=+24.183207227"
	
	
	==> storage-provisioner [80ed3d49c0632c3eaa73bafafd6473244873ce978bba5c020340cd1eabdda042] <==
	I1229 07:16:24.491271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:16:24.498626       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:16:24.498682       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:16:24.501252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:24.506362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:16:24.506590       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:16:24.506672       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5078cbe3-2c7d-4503-aba9-6d953718bd88", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-798607_bef8574e-b834-45ea-8ac4-04b3769e90f1 became leader
	I1229 07:16:24.506761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-798607_bef8574e-b834-45ea-8ac4-04b3769e90f1!
	W1229 07:16:24.509315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:24.513923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:16:24.607562       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-798607_bef8574e-b834-45ea-8ac4-04b3769e90f1!
	W1229 07:16:26.517570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:26.524890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:28.529338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:28.536079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:30.539521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:30.545387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:32.548254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:32.552394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:34.555658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:34.560775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:36.564589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:36.569279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-122332 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-122332 --alsologtostderr -v=1: exit status 80 (1.995972277s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-122332 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:17:04.413941  272892 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:04.414058  272892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:04.414069  272892 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:04.414076  272892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:04.414360  272892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:04.414685  272892 out.go:368] Setting JSON to false
	I1229 07:17:04.414702  272892 mustload.go:66] Loading cluster: no-preload-122332
	I1229 07:17:04.415170  272892 config.go:182] Loaded profile config "no-preload-122332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:04.415740  272892 cli_runner.go:164] Run: docker container inspect no-preload-122332 --format={{.State.Status}}
	I1229 07:17:04.439420  272892 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:17:04.439761  272892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:04.512115  272892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-29 07:17:04.499726093 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:04.513024  272892 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-122332 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:17:04.515825  272892 out.go:179] * Pausing node no-preload-122332 ... 
	I1229 07:17:04.519095  272892 host.go:66] Checking if "no-preload-122332" exists ...
	I1229 07:17:04.519490  272892 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:04.519546  272892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-122332
	I1229 07:17:04.542863  272892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/no-preload-122332/id_rsa Username:docker}
	I1229 07:17:04.651146  272892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:04.677300  272892 pause.go:52] kubelet running: true
	I1229 07:17:04.677372  272892 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:04.905643  272892 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:04.905770  272892 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:05.004431  272892 cri.go:96] found id: "545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96"
	I1229 07:17:05.004460  272892 cri.go:96] found id: "0da01eca9a562b5fe8053fa35b1c01007594c183cf9335c44971775cd1ec09d0"
	I1229 07:17:05.004466  272892 cri.go:96] found id: "f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b"
	I1229 07:17:05.004471  272892 cri.go:96] found id: "83ebe55fd0c5939b15566c7fa2cb8186d179a5062dc285850807eb6f771c21bb"
	I1229 07:17:05.004476  272892 cri.go:96] found id: "4749520de1b726f631eef5a9218e09908cae4d296fcd6920b8b44725efffa5f9"
	I1229 07:17:05.004480  272892 cri.go:96] found id: "182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd"
	I1229 07:17:05.004484  272892 cri.go:96] found id: "3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc"
	I1229 07:17:05.004489  272892 cri.go:96] found id: "482322719dad640690982288c2258e90836d194891b2179cab964e1340265902"
	I1229 07:17:05.004492  272892 cri.go:96] found id: "013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206"
	I1229 07:17:05.004501  272892 cri.go:96] found id: "322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	I1229 07:17:05.004505  272892 cri.go:96] found id: "b4ab1c883154a271188d140f15f54d642fc3b90bc67d3be7f26173073eed79c9"
	I1229 07:17:05.004509  272892 cri.go:96] found id: ""
	I1229 07:17:05.004555  272892 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:05.019716  272892 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:05Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:17:05.149025  272892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:05.167179  272892 pause.go:52] kubelet running: false
	I1229 07:17:05.167278  272892 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:05.367262  272892 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:05.367365  272892 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:05.461966  272892 cri.go:96] found id: "545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96"
	I1229 07:17:05.462002  272892 cri.go:96] found id: "0da01eca9a562b5fe8053fa35b1c01007594c183cf9335c44971775cd1ec09d0"
	I1229 07:17:05.462009  272892 cri.go:96] found id: "f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b"
	I1229 07:17:05.462014  272892 cri.go:96] found id: "83ebe55fd0c5939b15566c7fa2cb8186d179a5062dc285850807eb6f771c21bb"
	I1229 07:17:05.462019  272892 cri.go:96] found id: "4749520de1b726f631eef5a9218e09908cae4d296fcd6920b8b44725efffa5f9"
	I1229 07:17:05.462024  272892 cri.go:96] found id: "182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd"
	I1229 07:17:05.462028  272892 cri.go:96] found id: "3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc"
	I1229 07:17:05.462032  272892 cri.go:96] found id: "482322719dad640690982288c2258e90836d194891b2179cab964e1340265902"
	I1229 07:17:05.462036  272892 cri.go:96] found id: "013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206"
	I1229 07:17:05.462048  272892 cri.go:96] found id: "322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	I1229 07:17:05.462057  272892 cri.go:96] found id: "b4ab1c883154a271188d140f15f54d642fc3b90bc67d3be7f26173073eed79c9"
	I1229 07:17:05.462061  272892 cri.go:96] found id: ""
	I1229 07:17:05.462107  272892 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:06.001737  272892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:06.017609  272892 pause.go:52] kubelet running: false
	I1229 07:17:06.017677  272892 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:06.231199  272892 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:06.231307  272892 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:06.313255  272892 cri.go:96] found id: "545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96"
	I1229 07:17:06.313278  272892 cri.go:96] found id: "0da01eca9a562b5fe8053fa35b1c01007594c183cf9335c44971775cd1ec09d0"
	I1229 07:17:06.313283  272892 cri.go:96] found id: "f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b"
	I1229 07:17:06.313286  272892 cri.go:96] found id: "83ebe55fd0c5939b15566c7fa2cb8186d179a5062dc285850807eb6f771c21bb"
	I1229 07:17:06.313289  272892 cri.go:96] found id: "4749520de1b726f631eef5a9218e09908cae4d296fcd6920b8b44725efffa5f9"
	I1229 07:17:06.313294  272892 cri.go:96] found id: "182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd"
	I1229 07:17:06.313298  272892 cri.go:96] found id: "3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc"
	I1229 07:17:06.313302  272892 cri.go:96] found id: "482322719dad640690982288c2258e90836d194891b2179cab964e1340265902"
	I1229 07:17:06.313306  272892 cri.go:96] found id: "013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206"
	I1229 07:17:06.313313  272892 cri.go:96] found id: "322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	I1229 07:17:06.313318  272892 cri.go:96] found id: "b4ab1c883154a271188d140f15f54d642fc3b90bc67d3be7f26173073eed79c9"
	I1229 07:17:06.313323  272892 cri.go:96] found id: ""
	I1229 07:17:06.313389  272892 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:06.327239  272892 out.go:203] 
	W1229 07:17:06.328539  272892 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:17:06.328554  272892 out.go:285] * 
	* 
	W1229 07:17:06.330200  272892 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:17:06.332232  272892 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-122332 --alsologtostderr -v=1 failed: exit status 80
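Note on the failure above: the verbose pause trace shows kubelet being stopped and the kube-system containers still returned by crictl, but every "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory", and that is what produces the GUEST_PAUSE exit. A minimal manual reproduction sketch follows; the profile name is taken from this run, and inspecting /run/runc directly is an assumption about where runc keeps its container state, not a step minikube itself performs here:

	minikube -p no-preload-122332 ssh -- sudo systemctl is-active kubelet
	minikube -p no-preload-122332 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p no-preload-122332 ssh -- sudo runc list -f json     # expected to fail as in the trace above
	minikube -p no-preload-122332 ssh -- ls -ld /run/runc           # check whether the runc state directory exists at all

In this run the second and third pause attempts already report "kubelet running: false", so the non-zero exit comes from the runtime-listing step rather than from stopping kubelet.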
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-122332
helpers_test.go:244: (dbg) docker inspect no-preload-122332:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f",
	        "Created": "2025-12-29T07:14:49.513032226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:16:02.728248002Z",
	            "FinishedAt": "2025-12-29T07:16:01.740795191Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/hostname",
	        "HostsPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/hosts",
	        "LogPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f-json.log",
	        "Name": "/no-preload-122332",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-122332:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-122332",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f",
	                "LowerDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-122332",
	                "Source": "/var/lib/docker/volumes/no-preload-122332/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-122332",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-122332",
	                "name.minikube.sigs.k8s.io": "no-preload-122332",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fdc5efbc0e97a736dd762cf66d30f6e464dbfae8bd3796ec62650f5da62d14c4",
	            "SandboxKey": "/var/run/docker/netns/fdc5efbc0e97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-122332": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18727729929e09903a8602637fce4f42992b3e819228d475208a35800e81902c",
	                    "EndpointID": "7317cf54a0d5aab79aedbfcc4c5ee1e2268991f46c7a1b5a559990df8d67574f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "3e:89:14:71:83:9e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-122332",
	                        "9aa41434eb0f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332: exit status 2 (387.696392ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-122332 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-122332 logs -n 25: (1.552377885s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                │ stopped-upgrade-518014       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                          │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ stop    │ -p no-preload-122332 --alsologtostderr -v=3                                                                                                                              │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ delete  │ -p cert-expiration-452455                                                                                                                                                │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p disable-driver-mounts-708770                                                                                                                                          │ disable-driver-mounts-708770 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p no-preload-122332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                             │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                               │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                              │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:16:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:16:53.674737  269280 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:53.674829  269280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:53.674840  269280 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:53.674846  269280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:53.675081  269280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:16:53.675541  269280 out.go:368] Setting JSON to false
	I1229 07:16:53.676755  269280 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3566,"bootTime":1766989048,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:16:53.676830  269280 start.go:143] virtualization: kvm guest
	I1229 07:16:53.678604  269280 out.go:179] * [default-k8s-diff-port-798607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:16:53.679809  269280 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:16:53.679852  269280 notify.go:221] Checking for updates...
	I1229 07:16:53.682193  269280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:16:53.683273  269280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:53.684195  269280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:16:53.685317  269280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:16:53.686392  269280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:16:53.688062  269280 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:53.688582  269280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:16:53.713654  269280 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:16:53.713735  269280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:53.773722  269280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:16:53.763812458 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:53.773832  269280 docker.go:319] overlay module found
	I1229 07:16:53.777031  269280 out.go:179] * Using the docker driver based on existing profile
	I1229 07:16:53.778574  269280 start.go:309] selected driver: docker
	I1229 07:16:53.778590  269280 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:53.778676  269280 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:16:53.779254  269280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:53.837961  269280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:16:53.826615969 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:53.838279  269280 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:53.838323  269280 cni.go:84] Creating CNI manager for ""
	I1229 07:16:53.838396  269280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:53.838463  269280 start.go:353] cluster config:
	{Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:53.841727  269280 out.go:179] * Starting "default-k8s-diff-port-798607" primary control-plane node in "default-k8s-diff-port-798607" cluster
	I1229 07:16:53.842789  269280 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:16:53.844012  269280 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:16:53.845087  269280 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:53.845124  269280 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:16:53.845134  269280 cache.go:65] Caching tarball of preloaded images
	I1229 07:16:53.845191  269280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:16:53.845268  269280 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:16:53.845284  269280 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:16:53.845418  269280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/config.json ...
	I1229 07:16:53.868568  269280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:16:53.868586  269280 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:16:53.868603  269280 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:16:53.868641  269280 start.go:360] acquireMachinesLock for default-k8s-diff-port-798607: {Name:mk70c0b726e0ebb1a3d037018e7b56d52af0e215 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:53.868704  269280 start.go:364] duration metric: took 41.245µs to acquireMachinesLock for "default-k8s-diff-port-798607"
	I1229 07:16:53.868736  269280 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:16:53.868745  269280 fix.go:54] fixHost starting: 
	I1229 07:16:53.869055  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:53.886644  269280 fix.go:112] recreateIfNeeded on default-k8s-diff-port-798607: state=Stopped err=<nil>
	W1229 07:16:53.886676  269280 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:16:49.243708  268071 out.go:252] * Restarting existing docker container for "embed-certs-739827" ...
	I1229 07:16:49.243787  268071 cli_runner.go:164] Run: docker start embed-certs-739827
	I1229 07:16:49.502325  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:49.522981  268071 kic.go:430] container "embed-certs-739827" state is running.
	I1229 07:16:49.523543  268071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-739827
	I1229 07:16:49.544230  268071 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/config.json ...
	I1229 07:16:49.544511  268071 machine.go:94] provisionDockerMachine start ...
	I1229 07:16:49.544606  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:49.565179  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:49.565520  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:49.565543  268071 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:16:49.566394  268071 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41862->127.0.0.1:33083: read: connection reset by peer
	I1229 07:16:52.706163  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-739827
	
	I1229 07:16:52.706207  268071 ubuntu.go:182] provisioning hostname "embed-certs-739827"
	I1229 07:16:52.706295  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:52.725495  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:52.725770  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:52.725790  268071 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-739827 && echo "embed-certs-739827" | sudo tee /etc/hostname
	I1229 07:16:52.873147  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-739827
	
	I1229 07:16:52.873248  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:52.894665  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:52.894977  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:52.895004  268071 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-739827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-739827/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-739827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:16:53.034410  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:16:53.034438  268071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:16:53.034469  268071 ubuntu.go:190] setting up certificates
	I1229 07:16:53.034487  268071 provision.go:84] configureAuth start
	I1229 07:16:53.034551  268071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-739827
	I1229 07:16:53.055953  268071 provision.go:143] copyHostCerts
	I1229 07:16:53.056033  268071 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:16:53.056048  268071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:16:53.056126  268071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:16:53.056266  268071 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:16:53.056278  268071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:16:53.056315  268071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:16:53.056386  268071 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:16:53.056395  268071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:16:53.056419  268071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:16:53.056500  268071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.embed-certs-739827 san=[127.0.0.1 192.168.103.2 embed-certs-739827 localhost minikube]
	I1229 07:16:53.296848  268071 provision.go:177] copyRemoteCerts
	I1229 07:16:53.296902  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:16:53.296935  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:53.317808  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:53.422280  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:16:53.441814  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:16:53.461885  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:16:53.481146  268071 provision.go:87] duration metric: took 446.646845ms to configureAuth
	I1229 07:16:53.481178  268071 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:16:53.481414  268071 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:53.481565  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:53.502551  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:53.502840  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:53.502866  268071 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:16:53.839544  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:16:53.839570  268071 machine.go:97] duration metric: took 4.295039577s to provisionDockerMachine
	I1229 07:16:53.839582  268071 start.go:293] postStartSetup for "embed-certs-739827" (driver="docker")
	I1229 07:16:53.839594  268071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:16:53.839650  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:16:53.839704  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:53.861013  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:53.962699  268071 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:16:53.966474  268071 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:16:53.966507  268071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:16:53.966522  268071 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:16:53.966575  268071 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:16:53.966685  268071 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:16:53.966809  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:16:53.975023  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:54.001087  268071 start.go:296] duration metric: took 161.488475ms for postStartSetup
	I1229 07:16:54.001171  268071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:16:54.001244  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:54.022211  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:49.186705  225445 cri.go:96] found id: ""
	I1229 07:16:49.186738  225445 logs.go:282] 0 containers: []
	W1229 07:16:49.186749  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:49.186756  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:49.186813  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:49.219364  225445 cri.go:96] found id: ""
	I1229 07:16:49.219389  225445 logs.go:282] 0 containers: []
	W1229 07:16:49.219399  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:49.219410  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:49.219425  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:49.256204  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:49.256252  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:49.296336  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:49.296382  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:49.379178  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:49.379212  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:49.409248  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:49.409273  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:49.437709  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:49.437739  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:49.511823  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:49.511859  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:49.549208  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:49.549294  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:49.643989  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:49.644035  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:49.658567  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:49.658605  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:49.715614  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:49.715638  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:49.715654  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:49.751920  225445 logs.go:123] Gathering logs for kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] ...
	I1229 07:16:49.751952  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	W1229 07:16:49.780385  225445 logs.go:138] Found kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] problem: E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:49.780422  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:49.780441  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:49.815064  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:49.815101  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:49.815155  225445 out.go:285] X Problems detected in kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90]:
	W1229 07:16:49.815171  225445 out.go:285]   E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:49.815177  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:49.815189  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:54.118411  268071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:16:54.123468  268071 fix.go:56] duration metric: took 4.901774194s for fixHost
	I1229 07:16:54.123496  268071 start.go:83] releasing machines lock for "embed-certs-739827", held for 4.901829553s
	I1229 07:16:54.123568  268071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-739827
	I1229 07:16:54.144367  268071 ssh_runner.go:195] Run: cat /version.json
	I1229 07:16:54.144427  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:54.144475  268071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:16:54.144549  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:54.165900  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:54.166740  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:54.325848  268071 ssh_runner.go:195] Run: systemctl --version
	I1229 07:16:54.333096  268071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:16:54.378552  268071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:16:54.383371  268071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:16:54.383435  268071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:16:54.391645  268071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:16:54.391664  268071 start.go:496] detecting cgroup driver to use...
	I1229 07:16:54.391692  268071 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:16:54.391736  268071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:16:54.411531  268071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:16:54.426977  268071 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:16:54.427025  268071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:16:54.442322  268071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:16:54.455842  268071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:16:54.545359  268071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:16:54.626697  268071 docker.go:234] disabling docker service ...
	I1229 07:16:54.626749  268071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:16:54.640496  268071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:16:54.652042  268071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:16:54.731324  268071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:16:54.810557  268071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:16:54.822466  268071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:16:54.836181  268071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:16:54.836283  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.844955  268071 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:16:54.845015  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.853236  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.861414  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.869997  268071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:16:54.877832  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.886447  268071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.894595  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.903164  268071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:16:54.910354  268071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:16:54.917498  268071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:54.999905  268071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:16:55.138714  268071 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:16:55.138798  268071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:16:55.142782  268071 start.go:574] Will wait 60s for crictl version
	I1229 07:16:55.142848  268071 ssh_runner.go:195] Run: which crictl
	I1229 07:16:55.146522  268071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:16:55.170001  268071 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:16:55.170065  268071 ssh_runner.go:195] Run: crio --version
	I1229 07:16:55.196203  268071 ssh_runner.go:195] Run: crio --version
	I1229 07:16:55.225122  268071 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:16:55.226416  268071 cli_runner.go:164] Run: docker network inspect embed-certs-739827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:16:55.243756  268071 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1229 07:16:55.247870  268071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:55.257968  268071 kubeadm.go:884] updating cluster {Name:embed-certs-739827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-739827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:16:55.258082  268071 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:55.258124  268071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:55.290195  268071 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:55.290233  268071 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:16:55.290295  268071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:55.317321  268071 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:55.317343  268071 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:16:55.317350  268071 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1229 07:16:55.317435  268071 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-739827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-739827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:16:55.317495  268071 ssh_runner.go:195] Run: crio config
	I1229 07:16:55.362409  268071 cni.go:84] Creating CNI manager for ""
	I1229 07:16:55.362433  268071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:55.362448  268071 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:16:55.362470  268071 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-739827 NodeName:embed-certs-739827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:16:55.362591  268071 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-739827"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:16:55.362654  268071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:16:55.371551  268071 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:16:55.371622  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:16:55.379151  268071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1229 07:16:55.392005  268071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:16:55.404573  268071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
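The kubeadm v1beta4 configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before the control plane is restarted. As a rough sketch only (the profile name is taken from this log, and it assumes a kubeadm release new enough to ship the `config validate` subcommand), such a file can be pulled off the node and sanity-checked by hand:

    # copy the staged config off the node and validate it locally (illustrative, not part of the test run)
    minikube -p embed-certs-739827 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new > kubeadm.yaml
    kubeadm config validate --config kubeadm.yaml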
	I1229 07:16:55.416447  268071 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:16:55.419970  268071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:55.429402  268071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:55.506987  268071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:55.533696  268071 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827 for IP: 192.168.103.2
	I1229 07:16:55.533716  268071 certs.go:195] generating shared ca certs ...
	I1229 07:16:55.533730  268071 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:55.533887  268071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:16:55.533945  268071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:16:55.533959  268071 certs.go:257] generating profile certs ...
	I1229 07:16:55.534067  268071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/client.key
	I1229 07:16:55.534143  268071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/apiserver.key.2a13e84f
	I1229 07:16:55.534213  268071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/proxy-client.key
	I1229 07:16:55.534376  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:16:55.534423  268071 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:16:55.534469  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:16:55.534510  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:16:55.534547  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:16:55.534579  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:16:55.534638  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:55.535268  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:16:55.553091  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:16:55.571666  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:16:55.590416  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:16:55.612654  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1229 07:16:55.631860  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:16:55.649462  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:16:55.668006  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:16:55.685539  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:16:55.702160  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:16:55.719045  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:16:55.736825  268071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:16:55.748796  268071 ssh_runner.go:195] Run: openssl version
	I1229 07:16:55.754859  268071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.761973  268071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:16:55.769054  268071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.772576  268071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.772622  268071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.807333  268071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:16:55.815139  268071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.822628  268071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:16:55.829908  268071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.834170  268071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.834244  268071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.869069  268071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:16:55.876700  268071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.883926  268071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:16:55.890986  268071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.894623  268071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.894682  268071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.930007  268071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
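The openssl/ln pairs above are how the CA material is made trusted inside the node: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (here 3ec20f2e.0, b5213941.0 and 51391683.0). A minimal sketch of the same pattern for one certificate, using a path seen in the log:

    # link a CA cert under its subject-hash name so OpenSSL-based clients pick it up
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"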
	I1229 07:16:55.937656  268071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:16:55.941402  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:16:55.975959  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:16:56.009865  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:16:56.051542  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:16:56.096491  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:16:56.145609  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
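Each `-checkend 86400` call exits non-zero if the certificate expires within the next 24 hours, which is what lets minikube decide the existing control-plane certificates can be reused rather than regenerated. A sketch of the same check over a few of the paths seen above:

    # print any control-plane cert that would expire within 86400 seconds
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring soon: $crt"
    done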
	I1229 07:16:56.196508  268071 kubeadm.go:401] StartCluster: {Name:embed-certs-739827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-739827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:56.196629  268071 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:16:56.196699  268071 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:16:56.229777  268071 cri.go:96] found id: "64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555"
	I1229 07:16:56.229800  268071 cri.go:96] found id: "9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a"
	I1229 07:16:56.229806  268071 cri.go:96] found id: "f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc"
	I1229 07:16:56.229810  268071 cri.go:96] found id: "0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c"
	I1229 07:16:56.229815  268071 cri.go:96] found id: ""
	I1229 07:16:56.229859  268071 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:16:56.241656  268071 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:16:56Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:16:56.241760  268071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:16:56.249651  268071 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:16:56.249669  268071 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:16:56.249710  268071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:16:56.256981  268071 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:16:56.257775  268071 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-739827" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:56.258182  268071 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-739827" cluster setting kubeconfig missing "embed-certs-739827" context setting]
	I1229 07:16:56.258856  268071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:56.260639  268071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:16:56.269283  268071 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1229 07:16:56.269315  268071 kubeadm.go:602] duration metric: took 19.640054ms to restartPrimaryControlPlane
	I1229 07:16:56.269326  268071 kubeadm.go:403] duration metric: took 72.829066ms to StartCluster
	I1229 07:16:56.269344  268071 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:56.269414  268071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:56.271281  268071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:56.271570  268071 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:56.271687  268071 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:56.271790  268071 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-739827"
	I1229 07:16:56.271806  268071 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-739827"
	I1229 07:16:56.271805  268071 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1229 07:16:56.271814  268071 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:16:56.271810  268071 addons.go:70] Setting dashboard=true in profile "embed-certs-739827"
	I1229 07:16:56.271833  268071 addons.go:239] Setting addon dashboard=true in "embed-certs-739827"
	I1229 07:16:56.271841  268071 host.go:66] Checking if "embed-certs-739827" exists ...
	W1229 07:16:56.271843  268071 addons.go:248] addon dashboard should already be in state true
	I1229 07:16:56.271838  268071 addons.go:70] Setting default-storageclass=true in profile "embed-certs-739827"
	I1229 07:16:56.271879  268071 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-739827"
	I1229 07:16:56.271881  268071 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:56.272194  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.272378  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.272384  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.273507  268071 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:56.274580  268071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:56.296920  268071 addons.go:239] Setting addon default-storageclass=true in "embed-certs-739827"
	W1229 07:16:56.296949  268071 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:16:56.296979  268071 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:56.297493  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.297612  268071 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:56.299120  268071 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:56.299143  268071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:56.299207  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:56.301260  268071 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:16:56.302499  268071 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:16:56.303582  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:16:56.303603  268071 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:16:56.303651  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:56.320255  268071 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:56.320281  268071 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:56.320345  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:56.324308  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:56.327626  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:56.355956  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:56.439323  268071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:56.450493  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:16:56.450520  268071 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:16:56.451780  268071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:56.452668  268071 node_ready.go:35] waiting up to 6m0s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:56.464268  268071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:56.464766  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:16:56.464789  268071 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:16:56.479017  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:16:56.479040  268071 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:16:56.493130  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:16:56.493149  268071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:16:56.506469  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:16:56.506494  268071 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:16:56.520469  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:16:56.520500  268071 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:16:56.533796  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:16:56.533818  268071 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:16:56.546016  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:16:56.546036  268071 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:16:56.558769  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:56.558789  268071 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:16:56.570982  268071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
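Addon installation follows the same pattern throughout this section: each manifest is copied to /etc/kubernetes/addons on the node and then applied with the bundled kubectl against the in-VM kubeconfig. A condensed, illustrative form of that flow (paths and version are those shown in the log; this is not a supported interface):

    # inside the node: apply a staged addon manifest with the bundled kubectl
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    # the dashboard addon is applied the same way, as one batch of -f flags, one per manifest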
	I1229 07:16:57.587424  268071 node_ready.go:49] node "embed-certs-739827" is "Ready"
	I1229 07:16:57.587456  268071 node_ready.go:38] duration metric: took 1.134759132s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:57.587473  268071 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:57.587528  268071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:58.223824  268071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.772014189s)
	I1229 07:16:58.223903  268071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759604106s)
	I1229 07:16:58.224004  268071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.652986623s)
	I1229 07:16:58.224029  268071 api_server.go:72] duration metric: took 1.952426368s to wait for apiserver process to appear ...
	I1229 07:16:58.224044  268071 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:58.224065  268071 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:58.225705  268071 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-739827 addons enable metrics-server
	
	I1229 07:16:58.230342  268071 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:58.230365  268071 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
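The 500 responses above are expected for a short window after a restart: the aggregated /healthz endpoint fails until every post-start hook has completed, and here rbac/bootstrap-roles (and initially scheduling/bootstrap-system-priority-classes) are still pending. The same per-check view can be reproduced by hand from the host or from inside the node; the `?verbose` query string asks the apiserver to list each check individually (shown as an illustration):

    # query the apiserver health endpoints directly; -k because the serving cert is cluster-internal
    curl -k "https://192.168.103.2:8443/healthz?verbose"
    curl -k "https://192.168.103.2:8443/readyz?verbose"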
	I1229 07:16:58.236061  268071 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1229 07:16:53.888535  269280 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-798607" ...
	I1229 07:16:53.888610  269280 cli_runner.go:164] Run: docker start default-k8s-diff-port-798607
	I1229 07:16:54.134779  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:54.155865  269280 kic.go:430] container "default-k8s-diff-port-798607" state is running.
	I1229 07:16:54.156327  269280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-798607
	I1229 07:16:54.179273  269280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/config.json ...
	I1229 07:16:54.179538  269280 machine.go:94] provisionDockerMachine start ...
	I1229 07:16:54.179606  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:54.199668  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:54.199951  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:54.199971  269280 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:16:54.200636  269280 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52880->127.0.0.1:33088: read: connection reset by peer
	I1229 07:16:57.353328  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-798607
	
	I1229 07:16:57.353357  269280 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-798607"
	I1229 07:16:57.353420  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:57.374413  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:57.374642  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:57.374661  269280 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-798607 && echo "default-k8s-diff-port-798607" | sudo tee /etc/hostname
	I1229 07:16:57.535265  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-798607
	
	I1229 07:16:57.535358  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:57.555067  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:57.555329  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:57.555349  269280 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-798607' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-798607/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-798607' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:16:57.734163  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:16:57.734195  269280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:16:57.734268  269280 ubuntu.go:190] setting up certificates
	I1229 07:16:57.734283  269280 provision.go:84] configureAuth start
	I1229 07:16:57.734347  269280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-798607
	I1229 07:16:57.759994  269280 provision.go:143] copyHostCerts
	I1229 07:16:57.760060  269280 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:16:57.760090  269280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:16:57.760184  269280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:16:57.760800  269280 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:16:57.760822  269280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:16:57.760870  269280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:16:57.761029  269280 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:16:57.761045  269280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:16:57.761087  269280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:16:57.761175  269280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-798607 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-798607 localhost minikube]
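The docker-machine server certificate is regenerated with the SANs listed above so that both the container's internal address (192.168.85.2) and the locally forwarded 127.0.0.1 endpoint validate. A quick sketch for inspecting which SANs actually ended up in such a certificate, using the path from this log:

    # list the Subject Alternative Names baked into the generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem \
      | grep -A1 "Subject Alternative Name"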
	I1229 07:16:57.858590  269280 provision.go:177] copyRemoteCerts
	I1229 07:16:57.858686  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:16:57.858730  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:57.883395  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:57.992988  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:16:58.022849  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:16:58.051752  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1229 07:16:58.076859  269280 provision.go:87] duration metric: took 342.540112ms to configureAuth
	I1229 07:16:58.076900  269280 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:16:58.077390  269280 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:58.077529  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.100804  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:58.101122  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:58.101161  269280 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:16:58.423291  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:16:58.423324  269280 machine.go:97] duration metric: took 4.243766485s to provisionDockerMachine
	I1229 07:16:58.423339  269280 start.go:293] postStartSetup for "default-k8s-diff-port-798607" (driver="docker")
	I1229 07:16:58.423354  269280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:16:58.423415  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:16:58.423470  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.445130  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.544697  269280 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:16:58.548269  269280 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:16:58.548293  269280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:16:58.548303  269280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:16:58.548348  269280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:16:58.548417  269280 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:16:58.548508  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:16:58.556094  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:58.576602  269280 start.go:296] duration metric: took 153.245299ms for postStartSetup
	I1229 07:16:58.576684  269280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:16:58.576730  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.598430  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.237027  268071 addons.go:530] duration metric: took 1.965349819s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1229 07:16:58.724321  268071 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:58.729969  268071 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:58.729991  268071 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:16:58.702136  269280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:16:58.707636  269280 fix.go:56] duration metric: took 4.83888363s for fixHost
	I1229 07:16:58.707665  269280 start.go:83] releasing machines lock for "default-k8s-diff-port-798607", held for 4.838948781s
	I1229 07:16:58.707734  269280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-798607
	I1229 07:16:58.729806  269280 ssh_runner.go:195] Run: cat /version.json
	I1229 07:16:58.729859  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.729910  269280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:16:58.729993  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.751739  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.752472  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.906022  269280 ssh_runner.go:195] Run: systemctl --version
	I1229 07:16:58.912790  269280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:16:58.947369  269280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:16:58.952729  269280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:16:58.952791  269280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:16:58.961158  269280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:16:58.961188  269280 start.go:496] detecting cgroup driver to use...
	I1229 07:16:58.961229  269280 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:16:58.961275  269280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:16:58.976412  269280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:16:58.988422  269280 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:16:58.988487  269280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:16:59.003016  269280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:16:59.015790  269280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:16:59.097420  269280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:16:59.179667  269280 docker.go:234] disabling docker service ...
	I1229 07:16:59.179723  269280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:16:59.194569  269280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:16:59.207352  269280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:16:59.300204  269280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:16:59.381251  269280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
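Because the KIC base image ships docker and cri-dockerd alongside CRI-O, both are stopped, disabled and masked before CRI-O is configured; masking (symlinking the unit to /dev/null) keeps socket activation or another unit from starting them again behind CRI-O's back. Done by hand, the docker half of that sequence looks roughly like the commands in the log:

    # stop both units, disable the socket so it cannot re-activate the service, then mask the service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service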
	I1229 07:16:59.393641  269280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:16:59.407386  269280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:16:59.407436  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.415993  269280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:16:59.416052  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.424270  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.432612  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.441240  269280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:16:59.448948  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.457926  269280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.466179  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.474638  269280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:16:59.482009  269280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:16:59.489043  269280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:59.564381  269280 ssh_runner.go:195] Run: sudo systemctl restart crio
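The sed runs above rewrite CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, force the systemd cgroup manager, move conmon into the pod cgroup, and allow unprivileged binds to low ports via default_sysctls, while /etc/crictl.yaml points crictl at the CRI-O socket. As an illustration only, the net effect corresponds roughly to a drop-in like the following (field names are real CRI-O options, but the file contents are reconstructed, not copied from the node, and the real drop-in carries additional settings):

    # approximate effect of the edits above; do not paste over a live config
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio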
	I1229 07:16:59.712532  269280 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:16:59.712589  269280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:16:59.716750  269280 start.go:574] Will wait 60s for crictl version
	I1229 07:16:59.716807  269280 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.720200  269280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:16:59.744400  269280 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:16:59.744487  269280 ssh_runner.go:195] Run: crio --version
	I1229 07:16:59.772076  269280 ssh_runner.go:195] Run: crio --version
	I1229 07:16:59.803331  269280 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:16:59.804981  269280 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-798607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:16:59.823371  269280 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:16:59.827437  269280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:59.838312  269280 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:16:59.838459  269280 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:59.838541  269280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:59.875565  269280 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:59.875586  269280 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:16:59.875639  269280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:59.906671  269280 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:59.906695  269280 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:16:59.906705  269280 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1229 07:16:59.906801  269280 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-798607 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
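The unit fragment above is what gets installed as a kubelet drop-in: ExecStart is first cleared and then re-set so the kubelet runs with the node-specific flags (--hostname-override, --node-ip, --config). Standard systemd tooling shows which drop-ins are actually in effect on the node (sketch for illustration):

    # show the kubelet unit together with every drop-in that overrides it
    sudo systemctl cat kubelet
    # or list only the override files written by minikube
    ls /etc/systemd/system/kubelet.service.d/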
	I1229 07:16:59.906871  269280 ssh_runner.go:195] Run: crio config
	I1229 07:16:59.960073  269280 cni.go:84] Creating CNI manager for ""
	I1229 07:16:59.960102  269280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:59.960120  269280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:16:59.960151  269280 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-798607 NodeName:default-k8s-diff-port-798607 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:16:59.960333  269280 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-798607"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:16:59.960405  269280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:16:59.969068  269280 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:16:59.969131  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:16:59.976580  269280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1229 07:16:59.991540  269280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:17:00.005777  269280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1229 07:17:00.020212  269280 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:17:00.024758  269280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:00.036880  269280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:00.131475  269280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:00.160048  269280 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607 for IP: 192.168.85.2
	I1229 07:17:00.160074  269280 certs.go:195] generating shared ca certs ...
	I1229 07:17:00.160094  269280 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.160273  269280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:17:00.160334  269280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:17:00.160351  269280 certs.go:257] generating profile certs ...
	I1229 07:17:00.160459  269280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/client.key
	I1229 07:17:00.160524  269280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/apiserver.key.86858a19
	I1229 07:17:00.160556  269280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/proxy-client.key
	I1229 07:17:00.160673  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:17:00.160710  269280 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:17:00.160720  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:17:00.160754  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:17:00.160787  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:17:00.160825  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:17:00.160901  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:00.161691  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:17:00.182440  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:17:00.205142  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:17:00.226543  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:17:00.252897  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1229 07:17:00.275390  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:17:00.293737  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:17:00.310788  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:17:00.327602  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:17:00.345043  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:17:00.364798  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:17:00.384816  269280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:17:00.397657  269280 ssh_runner.go:195] Run: openssl version
	I1229 07:17:00.403639  269280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.411793  269280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:17:00.419573  269280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.423368  269280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.423415  269280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.461710  269280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:17:00.469582  269280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.477954  269280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:17:00.485941  269280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.489658  269280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.489709  269280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.528688  269280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:17:00.537870  269280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.545649  269280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:17:00.553726  269280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.557805  269280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.557869  269280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.594096  269280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:17:00.601814  269280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:17:00.605469  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:17:00.641364  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:17:00.678445  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:17:00.734451  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:17:00.779781  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:17:00.837839  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:17:00.878628  269280 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:17:00.878727  269280 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:17:00.878789  269280 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:17:00.908801  269280 cri.go:96] found id: "2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102"
	I1229 07:17:00.908823  269280 cri.go:96] found id: "b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608"
	I1229 07:17:00.908829  269280 cri.go:96] found id: "7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833"
	I1229 07:17:00.908836  269280 cri.go:96] found id: "c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da"
	I1229 07:17:00.908841  269280 cri.go:96] found id: ""
	I1229 07:17:00.908884  269280 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:17:00.921184  269280 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:00Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:17:00.921264  269280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:17:00.929477  269280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:17:00.929494  269280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:17:00.929540  269280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:17:00.937562  269280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:17:00.938465  269280 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-798607" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:00.939015  269280 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-798607" cluster setting kubeconfig missing "default-k8s-diff-port-798607" context setting]
	I1229 07:17:00.939924  269280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.941618  269280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:17:00.949752  269280 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:17:00.949782  269280 kubeadm.go:602] duration metric: took 20.281612ms to restartPrimaryControlPlane
	I1229 07:17:00.949799  269280 kubeadm.go:403] duration metric: took 71.176228ms to StartCluster
	I1229 07:17:00.949816  269280 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.949884  269280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:00.952411  269280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.952767  269280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:00.952842  269280 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:17:00.952937  269280 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-798607"
	I1229 07:17:00.952953  269280 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-798607"
	W1229 07:17:00.952961  269280 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:17:00.952905  269280 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:00.952994  269280 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:00.952989  269280 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-798607"
	I1229 07:17:00.953010  269280 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-798607"
	W1229 07:17:00.953024  269280 addons.go:248] addon dashboard should already be in state true
	I1229 07:17:00.953048  269280 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:00.953041  269280 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-798607"
	I1229 07:17:00.953075  269280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-798607"
	I1229 07:17:00.953489  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.953544  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.953544  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.954872  269280 out.go:179] * Verifying Kubernetes components...
	I1229 07:17:00.956186  269280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:00.981243  269280 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-798607"
	W1229 07:17:00.981267  269280 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:17:00.981295  269280 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:00.981422  269280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:17:00.981428  269280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:17:00.981739  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.982984  269280 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:00.983021  269280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:17:00.983070  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:00.986305  269280 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:17:00.987464  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:17:00.987493  269280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:17:00.987547  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:01.018440  269280 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:01.018536  269280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:17:01.018624  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:01.024615  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:01.025944  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:01.043350  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:01.098375  269280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:01.110911  269280 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:17:01.136377  269280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:01.138937  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:17:01.138961  269280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:17:01.153106  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:17:01.153128  269280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:17:01.156903  269280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:01.166688  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:17:01.166716  269280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:17:01.180384  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:17:01.180407  269280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:17:01.196316  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:17:01.196344  269280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:17:01.209332  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:17:01.209351  269280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:17:01.222884  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:17:01.222907  269280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:17:01.236019  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:17:01.236045  269280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:17:01.249461  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:17:01.249482  269280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:17:01.262076  269280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:17:02.899952  269280 node_ready.go:49] node "default-k8s-diff-port-798607" is "Ready"
	I1229 07:17:02.899987  269280 node_ready.go:38] duration metric: took 1.78904051s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:17:02.900003  269280 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:17:02.900053  269280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:17:03.653408  269280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.496471206s)
	I1229 07:17:03.653606  269280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.391447046s)
	I1229 07:17:03.653417  269280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.516981335s)
	I1229 07:17:03.653770  269280 api_server.go:72] duration metric: took 2.700967736s to wait for apiserver process to appear ...
	I1229 07:17:03.653783  269280 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:17:03.653800  269280 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1229 07:17:03.655394  269280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-798607 addons enable metrics-server
	
	I1229 07:17:03.661118  269280 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:17:03.661144  269280 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:17:03.664741  269280 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1229 07:17:03.665617  269280 addons.go:530] duration metric: took 2.712781499s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1229 07:16:59.224356  268071 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:59.228577  268071 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:16:59.229566  268071 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:59.229590  268071 api_server.go:131] duration metric: took 1.005538983s to wait for apiserver health ...
	I1229 07:16:59.229598  268071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:59.233207  268071 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:59.233260  268071 system_pods.go:61] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:59.233273  268071 system_pods.go:61] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:59.233278  268071 system_pods.go:61] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:59.233285  268071 system_pods.go:61] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:59.233294  268071 system_pods.go:61] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:59.233298  268071 system_pods.go:61] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:59.233304  268071 system_pods.go:61] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:59.233307  268071 system_pods.go:61] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Running
	I1229 07:16:59.233325  268071 system_pods.go:74] duration metric: took 3.722075ms to wait for pod list to return data ...
	I1229 07:16:59.233334  268071 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:59.235660  268071 default_sa.go:45] found service account: "default"
	I1229 07:16:59.235683  268071 default_sa.go:55] duration metric: took 2.342702ms for default service account to be created ...
	I1229 07:16:59.235693  268071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:59.240999  268071 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:59.241039  268071 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:59.241050  268071 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:59.241057  268071 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:59.241067  268071 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:59.241074  268071 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:59.241080  268071 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:59.241089  268071 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:59.241093  268071 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Running
	I1229 07:16:59.241104  268071 system_pods.go:126] duration metric: took 5.403763ms to wait for k8s-apps to be running ...
	I1229 07:16:59.241116  268071 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:59.241163  268071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:59.260325  268071 system_svc.go:56] duration metric: took 19.201478ms WaitForService to wait for kubelet
	I1229 07:16:59.260362  268071 kubeadm.go:587] duration metric: took 2.988758005s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:59.260386  268071 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:59.263806  268071 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:59.263835  268071 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:59.263853  268071 node_conditions.go:105] duration metric: took 3.46033ms to run NodePressure ...
	I1229 07:16:59.263877  268071 start.go:242] waiting for startup goroutines ...
	I1229 07:16:59.263889  268071 start.go:247] waiting for cluster config update ...
	I1229 07:16:59.263903  268071 start.go:256] writing updated cluster config ...
	I1229 07:16:59.264243  268071 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:59.268391  268071 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:59.271672  268071 pod_ready.go:83] waiting for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:17:01.277861  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:03.278152  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:16:59.817661  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:59.818080  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:59.818142  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:59.818206  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:59.847971  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:59.847997  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:59.848004  225445 cri.go:96] found id: ""
	I1229 07:16:59.848013  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:59.848071  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.852004  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.856151  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:59.856233  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:59.886546  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:59.886568  225445 cri.go:96] found id: ""
	I1229 07:16:59.886577  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:59.886632  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.890671  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:59.890754  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:59.922157  225445 cri.go:96] found id: ""
	I1229 07:16:59.922184  225445 logs.go:282] 0 containers: []
	W1229 07:16:59.922193  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:59.922199  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:59.922269  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:59.955240  225445 cri.go:96] found id: "14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	I1229 07:16:59.955264  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:59.955270  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:59.955274  225445 cri.go:96] found id: ""
	I1229 07:16:59.955283  225445 logs.go:282] 3 containers: [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:59.955352  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.960257  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.964304  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.967919  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:59.967976  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:59.998958  225445 cri.go:96] found id: ""
	I1229 07:16:59.998983  225445 logs.go:282] 0 containers: []
	W1229 07:16:59.998992  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:59.999000  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:59.999053  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:17:00.031391  225445 cri.go:96] found id: "d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e"
	I1229 07:17:00.031411  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:17:00.031414  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:17:00.031417  225445 cri.go:96] found id: ""
	I1229 07:17:00.031425  225445 logs.go:282] 3 containers: [d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:17:00.031479  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:00.036097  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:00.040261  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:00.044030  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:17:00.044095  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:17:00.082824  225445 cri.go:96] found id: ""
	I1229 07:17:00.082857  225445 logs.go:282] 0 containers: []
	W1229 07:17:00.082869  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:17:00.082878  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:17:00.082939  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:17:00.113619  225445 cri.go:96] found id: ""
	I1229 07:17:00.113650  225445 logs.go:282] 0 containers: []
	W1229 07:17:00.113661  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:17:00.113672  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:17:00.113689  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:17:00.142716  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:17:00.142741  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:17:00.235813  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:17:00.235902  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:17:00.338703  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:17:00.338733  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:17:00.352779  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:17:00.352804  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:17:00.388501  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:17:00.388528  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:17:00.421889  225445 logs.go:123] Gathering logs for kube-controller-manager [d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e] ...
	I1229 07:17:00.421918  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e"
	I1229 07:17:00.449936  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:17:00.449961  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:17:00.476666  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:17:00.476693  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:17:00.510330  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:17:00.510357  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Dec 29 07:16:31 no-preload-122332 crio[580]: time="2025-12-29T07:16:31.469558385Z" level=info msg="Started container" PID=1791 containerID=4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper id=99077551-8f48-4480-8193-6203bf551c66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4e6bf33b90cd084ab70e1629d95bb9642de99fd03f4957f8246aa41ff068c9
	Dec 29 07:16:32 no-preload-122332 crio[580]: time="2025-12-29T07:16:32.292339661Z" level=info msg="Removing container: f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145" id=9cdeb85c-12fa-4b94-b6c9-85b65ad7b2b0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:16:32 no-preload-122332 crio[580]: time="2025-12-29T07:16:32.304674566Z" level=info msg="Removed container f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=9cdeb85c-12fa-4b94-b6c9-85b65ad7b2b0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.318924017Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f528f112-70db-4990-8483-9916e3e5301a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.319953887Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58aab12a-1b18-4831-86b2-efd7c0c03641 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.320971906Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eaa6a59-7f8e-46c7-9d1e-d92f7b899879 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.321112626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325422665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325619378Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3432c767b3f9fe5d1c33d9a9ceb7dfbe8eef1b1fcc05b379b8ce5abe3b57c4b0/merged/etc/passwd: no such file or directory"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325649612Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3432c767b3f9fe5d1c33d9a9ceb7dfbe8eef1b1fcc05b379b8ce5abe3b57c4b0/merged/etc/group: no such file or directory"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325948418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.352352079Z" level=info msg="Created container 545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96: kube-system/storage-provisioner/storage-provisioner" id=2eaa6a59-7f8e-46c7-9d1e-d92f7b899879 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.352972293Z" level=info msg="Starting container: 545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96" id=41a57976-39ec-4bea-847c-dc8e948bb212 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.354790776Z" level=info msg="Started container" PID=1808 containerID=545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96 description=kube-system/storage-provisioner/storage-provisioner id=41a57976-39ec-4bea-847c-dc8e948bb212 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e08fbed83128e8a3aa813cf3f1f445a8cb3767b29a3f9d6f0218b7cbc487ef5
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.211687159Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0284802b-5455-47cf-9132-9150a238764c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.212896671Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4a359756-0b3f-4b02-a4f9-9f39d4a3ee74 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.214329854Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=d6292ce8-38f0-4bd3-ae77-414b4cfe85d9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.214492575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.221178329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.221886491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.244514565Z" level=info msg="Created container 322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=d6292ce8-38f0-4bd3-ae77-414b4cfe85d9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.245156699Z" level=info msg="Starting container: 322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828" id=e259a160-5096-4545-8cb2-b67ef0494dfb name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.246921154Z" level=info msg="Started container" PID=1848 containerID=322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper id=e259a160-5096-4545-8cb2-b67ef0494dfb name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4e6bf33b90cd084ab70e1629d95bb9642de99fd03f4957f8246aa41ff068c9
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.351037157Z" level=info msg="Removing container: 4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0" id=70cabe84-f1bd-4e30-b53e-48c9eb8cec57 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.361526727Z" level=info msg="Removed container 4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=70cabe84-f1bd-4e30-b53e-48c9eb8cec57 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	322e7b29c6c56       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   3e4e6bf33b90c       dashboard-metrics-scraper-867fb5f87b-8kjsc   kubernetes-dashboard
	545c1cbee5f14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   3e08fbed83128       storage-provisioner                          kube-system
	b4ab1c883154a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   6f98c18b2c171       kubernetes-dashboard-b84665fb8-vrx7d         kubernetes-dashboard
	f959fe071dc9a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   f49ee03213e08       busybox                                      default
	0da01eca9a562       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   01d00e058e623       coredns-7d764666f9-6rcr2                     kube-system
	f6bda58857416       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   3e08fbed83128       storage-provisioner                          kube-system
	83ebe55fd0c59       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 0                   00c641f33ad10       kindnet-rq99f                                kube-system
	4749520de1b72       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           55 seconds ago      Running             kube-proxy                  0                   bc79067c0f5b7       kube-proxy-qvww2                             kube-system
	182221ab78b63       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           58 seconds ago      Running             kube-controller-manager     0                   55fca5419a43a       kube-controller-manager-no-preload-122332    kube-system
	3c840a729524e       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           58 seconds ago      Running             kube-apiserver              0                   36ea1b63465cd       kube-apiserver-no-preload-122332             kube-system
	482322719dad6       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           58 seconds ago      Running             kube-scheduler              0                   a46a8654dea5c       kube-scheduler-no-preload-122332             kube-system
	013472dcacb3d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        0                   dc1be2ee47f52       etcd-no-preload-122332                       kube-system
	
	
	==> coredns [0da01eca9a562b5fe8053fa35b1c01007594c183cf9335c44971775cd1ec09d0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36395 - 25710 "HINFO IN 423111088672476072.3620522526429269876. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.01542756s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-122332
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-122332
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=no-preload-122332
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_15_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-122332
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-122332
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                da04b11d-c694-431a-acb9-a897f234eb76
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-6rcr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-122332                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-rq99f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-122332              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-122332     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-qvww2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-122332              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-8kjsc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vrx7d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-122332 event: Registered Node no-preload-122332 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-122332 event: Registered Node no-preload-122332 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206] <==
	{"level":"info","ts":"2025-12-29T07:16:09.769534Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:16:09.769581Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:16:09.769603Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-29T07:16:09.769715Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:16:09.769742Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:16:09.769778Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:16:09.769858Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:16:10.460201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:10.460267Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:10.460308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:10.460317Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:10.460332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.460896Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.460929Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:10.460964Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.460979Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.462299Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:10.462294Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-122332 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:16:10.462321Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:10.462718Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:10.462746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:10.464441Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:10.464554Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:10.466740Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-29T07:16:10.466806Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:17:07 up 59 min,  0 user,  load average: 2.64, 2.69, 2.01
	Linux no-preload-122332 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83ebe55fd0c5939b15566c7fa2cb8186d179a5062dc285850807eb6f771c21bb] <==
	I1229 07:16:12.811901       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:16:12.812191       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1229 07:16:12.812399       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:16:12.812428       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:16:12.812443       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:16:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:16:13.108371       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:16:13.108623       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:16:13.207008       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:16:13.208160       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:16:13.507836       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:16:13.507863       1 metrics.go:72] Registering metrics
	I1229 07:16:13.507915       1 controller.go:711] "Syncing nftables rules"
	I1229 07:16:23.107974       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:23.108057       1 main.go:301] handling current node
	I1229 07:16:33.107759       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:33.107788       1 main.go:301] handling current node
	I1229 07:16:43.107940       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:43.108000       1 main.go:301] handling current node
	I1229 07:16:53.107606       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:53.107644       1 main.go:301] handling current node
	I1229 07:17:03.107825       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:17:03.107870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc] <==
	I1229 07:16:11.587336       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:11.587394       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:16:11.587454       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:16:11.587463       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:16:11.587470       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:16:11.587647       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:16:11.587659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:16:11.587934       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1229 07:16:11.593934       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:16:11.594602       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:16:11.600264       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:11.603083       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:16:11.603100       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:16:11.633166       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:16:11.876801       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:16:11.903184       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:16:11.920235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:16:11.926159       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:16:11.933197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:16:11.963794       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.138.101"}
	I1229 07:16:11.974302       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.161.243"}
	I1229 07:16:12.492769       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:16:15.207352       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:16:15.257296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:16:15.456766       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd] <==
	I1229 07:16:14.759460       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759477       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759493       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759398       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759460       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760036       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760085       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760415       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760425       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760435       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760459       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760630       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760427       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760686       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760415       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760740       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:16:14.760821       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-122332"
	I1229 07:16:14.760887       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:16:14.762883       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.763519       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.770385       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:14.860529       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.860547       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:16:14.860554       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:16:14.870930       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4749520de1b726f631eef5a9218e09908cae4d296fcd6920b8b44725efffa5f9] <==
	I1229 07:16:12.659884       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:16:12.740653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:12.841762       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:12.841825       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1229 07:16:12.841930       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:16:12.865462       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:16:12.865533       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:16:12.872021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:16:12.872517       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:16:12.872532       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:12.874214       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:16:12.874243       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:16:12.874270       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:16:12.874276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:16:12.874353       1 config.go:309] "Starting node config controller"
	I1229 07:16:12.874367       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:16:12.874374       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:16:12.874085       1 config.go:200] "Starting service config controller"
	I1229 07:16:12.874390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:16:12.974508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:16:12.974535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:16:12.975096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [482322719dad640690982288c2258e90836d194891b2179cab964e1340265902] <==
	I1229 07:16:10.025238       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:16:11.501865       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:16:11.501916       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:16:11.501927       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:16:11.501937       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:16:11.546602       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:16:11.546637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:11.550479       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:16:11.550579       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:11.550945       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:16:11.551018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:16:11.651085       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:16:26 no-preload-122332 kubelet[732]: E1229 07:16:26.831289     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-122332" containerName="kube-controller-manager"
	Dec 29 07:16:31 no-preload-122332 kubelet[732]: E1229 07:16:31.421108     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:31 no-preload-122332 kubelet[732]: I1229 07:16:31.421142     732 scope.go:122] "RemoveContainer" containerID="f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: I1229 07:16:32.291024     732 scope.go:122] "RemoveContainer" containerID="f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: E1229 07:16:32.291299     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: I1229 07:16:32.291338     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: E1229 07:16:32.291545     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:16:41 no-preload-122332 kubelet[732]: E1229 07:16:41.421154     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:41 no-preload-122332 kubelet[732]: I1229 07:16:41.421193     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:41 no-preload-122332 kubelet[732]: E1229 07:16:41.421392     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:16:43 no-preload-122332 kubelet[732]: I1229 07:16:43.318579     732 scope.go:122] "RemoveContainer" containerID="f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b"
	Dec 29 07:16:50 no-preload-122332 kubelet[732]: E1229 07:16:50.951630     732 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6rcr2" containerName="coredns"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: E1229 07:16:54.211013     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: I1229 07:16:54.211049     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: I1229 07:16:54.349303     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: E1229 07:16:54.349551     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: I1229 07:16:54.349596     732 scope.go:122] "RemoveContainer" containerID="322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: E1229 07:16:54.349786     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:17:01 no-preload-122332 kubelet[732]: E1229 07:17:01.421129     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:01 no-preload-122332 kubelet[732]: I1229 07:17:01.421183     732 scope.go:122] "RemoveContainer" containerID="322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	Dec 29 07:17:01 no-preload-122332 kubelet[732]: E1229 07:17:01.421424     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:17:04 no-preload-122332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:04 no-preload-122332 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:04 no-preload-122332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:17:04 no-preload-122332 systemd[1]: kubelet.service: Consumed 1.728s CPU time.
	
	
	==> kubernetes-dashboard [b4ab1c883154a271188d140f15f54d642fc3b90bc67d3be7f26173073eed79c9] <==
	2025/12/29 07:16:18 Using namespace: kubernetes-dashboard
	2025/12/29 07:16:18 Using in-cluster config to connect to apiserver
	2025/12/29 07:16:18 Using secret token for csrf signing
	2025/12/29 07:16:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:16:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:16:18 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:16:18 Generating JWE encryption key
	2025/12/29 07:16:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:16:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:16:18 Initializing JWE encryption key from synchronized object
	2025/12/29 07:16:18 Creating in-cluster Sidecar client
	2025/12/29 07:16:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:16:18 Serving insecurely on HTTP port: 9090
	2025/12/29 07:16:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:16:18 Starting overwatch
	
	
	==> storage-provisioner [545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96] <==
	I1229 07:16:43.366908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:16:43.374369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:16:43.374414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:16:43.376471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:46.831396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:51.091621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:54.691108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:57.745584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:00.768407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:00.774189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:00.774543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:17:00.774712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-122332_8f4ee5ed-151a-4e41-a32f-de4e707c566a!
	I1229 07:17:00.775093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"643709f2-3cd4-4ace-8f28-a3dfde29064a", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-122332_8f4ee5ed-151a-4e41-a32f-de4e707c566a became leader
	W1229 07:17:00.780500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:00.786915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:00.875831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-122332_8f4ee5ed-151a-4e41-a32f-de4e707c566a!
	W1229 07:17:02.790919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:02.801208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:04.805088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:04.812643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:06.822271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:06.860678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b] <==
	I1229 07:16:12.626233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:16:42.629303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122332 -n no-preload-122332
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122332 -n no-preload-122332: exit status 2 (389.663496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-122332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-122332
helpers_test.go:244: (dbg) docker inspect no-preload-122332:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f",
	        "Created": "2025-12-29T07:14:49.513032226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:16:02.728248002Z",
	            "FinishedAt": "2025-12-29T07:16:01.740795191Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/hostname",
	        "HostsPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/hosts",
	        "LogPath": "/var/lib/docker/containers/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f/9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f-json.log",
	        "Name": "/no-preload-122332",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-122332:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-122332",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9aa41434eb0f8df6141a3e51dfd50ea3fd6a2b2f3e194918c950f2f55445a90f",
	                "LowerDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2357c0b79397c13786788b28fea63035db3d475bb6e264a508668d9a8bb0046/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-122332",
	                "Source": "/var/lib/docker/volumes/no-preload-122332/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-122332",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-122332",
	                "name.minikube.sigs.k8s.io": "no-preload-122332",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fdc5efbc0e97a736dd762cf66d30f6e464dbfae8bd3796ec62650f5da62d14c4",
	            "SandboxKey": "/var/run/docker/netns/fdc5efbc0e97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-122332": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18727729929e09903a8602637fce4f42992b3e819228d475208a35800e81902c",
	                    "EndpointID": "7317cf54a0d5aab79aedbfcc4c5ee1e2268991f46c7a1b5a559990df8d67574f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "3e:89:14:71:83:9e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-122332",
	                        "9aa41434eb0f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332: exit status 2 (385.076549ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-122332 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-122332 logs -n 25: (1.243255974s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-518014                                                                                                                                                │ stopped-upgrade-518014       │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:14 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:14 UTC │ 29 Dec 25 07:15 UTC │
	│ image   │ old-k8s-version-876718 image list --format=json                                                                                                                          │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ pause   │ -p old-k8s-version-876718 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ stop    │ -p no-preload-122332 --alsologtostderr -v=3                                                                                                                              │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ delete  │ -p cert-expiration-452455                                                                                                                                                │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p disable-driver-mounts-708770                                                                                                                                          │ disable-driver-mounts-708770 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p no-preload-122332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                             │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                               │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                              │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:16:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:16:53.674737  269280 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:53.674829  269280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:53.674840  269280 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:53.674846  269280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:53.675081  269280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:16:53.675541  269280 out.go:368] Setting JSON to false
	I1229 07:16:53.676755  269280 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3566,"bootTime":1766989048,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:16:53.676830  269280 start.go:143] virtualization: kvm guest
	I1229 07:16:53.678604  269280 out.go:179] * [default-k8s-diff-port-798607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:16:53.679809  269280 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:16:53.679852  269280 notify.go:221] Checking for updates...
	I1229 07:16:53.682193  269280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:16:53.683273  269280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:53.684195  269280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:16:53.685317  269280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:16:53.686392  269280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:16:53.688062  269280 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:53.688582  269280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:16:53.713654  269280 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:16:53.713735  269280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:53.773722  269280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:16:53.763812458 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:53.773832  269280 docker.go:319] overlay module found
	I1229 07:16:53.777031  269280 out.go:179] * Using the docker driver based on existing profile
	I1229 07:16:53.778574  269280 start.go:309] selected driver: docker
	I1229 07:16:53.778590  269280 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:53.778676  269280 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:16:53.779254  269280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:53.837961  269280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:16:53.826615969 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:16:53.838279  269280 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:53.838323  269280 cni.go:84] Creating CNI manager for ""
	I1229 07:16:53.838396  269280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:53.838463  269280 start.go:353] cluster config:
	{Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:53.841727  269280 out.go:179] * Starting "default-k8s-diff-port-798607" primary control-plane node in "default-k8s-diff-port-798607" cluster
	I1229 07:16:53.842789  269280 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:16:53.844012  269280 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:16:53.845087  269280 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:53.845124  269280 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:16:53.845134  269280 cache.go:65] Caching tarball of preloaded images
	I1229 07:16:53.845191  269280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:16:53.845268  269280 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:16:53.845284  269280 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:16:53.845418  269280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/config.json ...
	I1229 07:16:53.868568  269280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:16:53.868586  269280 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:16:53.868603  269280 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:16:53.868641  269280 start.go:360] acquireMachinesLock for default-k8s-diff-port-798607: {Name:mk70c0b726e0ebb1a3d037018e7b56d52af0e215 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:16:53.868704  269280 start.go:364] duration metric: took 41.245µs to acquireMachinesLock for "default-k8s-diff-port-798607"
	I1229 07:16:53.868736  269280 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:16:53.868745  269280 fix.go:54] fixHost starting: 
	I1229 07:16:53.869055  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:53.886644  269280 fix.go:112] recreateIfNeeded on default-k8s-diff-port-798607: state=Stopped err=<nil>
	W1229 07:16:53.886676  269280 fix.go:138] unexpected machine state, will restart: <nil>
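Note: the fixHost path above inspects the existing container's state, finds it Stopped, and (as the embed-certs-739827 lines below show) restarts it with `docker start` instead of recreating the machine. For reference, a minimal sketch of that inspect-then-start decision driving the same Docker CLI commands via os/exec; the profile name is taken from the log, the error handling is illustrative and not minikube's actual implementation.

// restart_if_stopped.go - illustrative sketch only; mirrors the
// "docker container inspect <name> --format={{.State.Status}}" and
// "docker start <name>" commands visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the Docker-reported state ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "embed-certs-739827" // container/profile name taken from the log
	state, err := containerState(name)
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if state != "running" {
		// Reuse the existing machine instead of recreating it.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			fmt.Println("docker start failed:", err)
			return
		}
	}
	fmt.Println("container", name, "is running")
}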
	I1229 07:16:49.243708  268071 out.go:252] * Restarting existing docker container for "embed-certs-739827" ...
	I1229 07:16:49.243787  268071 cli_runner.go:164] Run: docker start embed-certs-739827
	I1229 07:16:49.502325  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:49.522981  268071 kic.go:430] container "embed-certs-739827" state is running.
	I1229 07:16:49.523543  268071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-739827
	I1229 07:16:49.544230  268071 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/config.json ...
	I1229 07:16:49.544511  268071 machine.go:94] provisionDockerMachine start ...
	I1229 07:16:49.544606  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:49.565179  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:49.565520  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:49.565543  268071 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:16:49.566394  268071 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41862->127.0.0.1:33083: read: connection reset by peer
	I1229 07:16:52.706163  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-739827
	
	I1229 07:16:52.706207  268071 ubuntu.go:182] provisioning hostname "embed-certs-739827"
	I1229 07:16:52.706295  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:52.725495  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:52.725770  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:52.725790  268071 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-739827 && echo "embed-certs-739827" | sudo tee /etc/hostname
	I1229 07:16:52.873147  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-739827
	
	I1229 07:16:52.873248  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:52.894665  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:52.894977  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:52.895004  268071 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-739827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-739827/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-739827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:16:53.034410  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: 
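Note: the hostname and /etc/hosts commands above are executed over SSH against the container's published SSH port (127.0.0.1:33083) with the machine's id_rsa key, as the sshutil lines show. A minimal, self-contained sketch of running one such command with golang.org/x/crypto/ssh follows; the address, user and key path are copied from the log, and skipping host-key verification is an assumption for a local test container, not minikube's ssh_runner.

// ssh_run.go - illustrative sketch of running a provisioning command over SSH,
// in the spirit of the ssh_runner/sshutil lines above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33083", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}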
	I1229 07:16:53.034438  268071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:16:53.034469  268071 ubuntu.go:190] setting up certificates
	I1229 07:16:53.034487  268071 provision.go:84] configureAuth start
	I1229 07:16:53.034551  268071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-739827
	I1229 07:16:53.055953  268071 provision.go:143] copyHostCerts
	I1229 07:16:53.056033  268071 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:16:53.056048  268071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:16:53.056126  268071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:16:53.056266  268071 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:16:53.056278  268071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:16:53.056315  268071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:16:53.056386  268071 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:16:53.056395  268071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:16:53.056419  268071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:16:53.056500  268071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.embed-certs-739827 san=[127.0.0.1 192.168.103.2 embed-certs-739827 localhost minikube]
	I1229 07:16:53.296848  268071 provision.go:177] copyRemoteCerts
	I1229 07:16:53.296902  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:16:53.296935  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:53.317808  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:53.422280  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:16:53.441814  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:16:53.461885  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:16:53.481146  268071 provision.go:87] duration metric: took 446.646845ms to configureAuth
	I1229 07:16:53.481178  268071 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:16:53.481414  268071 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:53.481565  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:53.502551  268071 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:53.502840  268071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1229 07:16:53.502866  268071 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:16:53.839544  268071 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:16:53.839570  268071 machine.go:97] duration metric: took 4.295039577s to provisionDockerMachine
	I1229 07:16:53.839582  268071 start.go:293] postStartSetup for "embed-certs-739827" (driver="docker")
	I1229 07:16:53.839594  268071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:16:53.839650  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:16:53.839704  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:53.861013  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:53.962699  268071 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:16:53.966474  268071 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:16:53.966507  268071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:16:53.966522  268071 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:16:53.966575  268071 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:16:53.966685  268071 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:16:53.966809  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:16:53.975023  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:54.001087  268071 start.go:296] duration metric: took 161.488475ms for postStartSetup
	I1229 07:16:54.001171  268071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:16:54.001244  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:54.022211  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:49.186705  225445 cri.go:96] found id: ""
	I1229 07:16:49.186738  225445 logs.go:282] 0 containers: []
	W1229 07:16:49.186749  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:16:49.186756  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:16:49.186813  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:16:49.219364  225445 cri.go:96] found id: ""
	I1229 07:16:49.219389  225445 logs.go:282] 0 containers: []
	W1229 07:16:49.219399  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:16:49.219410  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:16:49.219425  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:49.256204  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:16:49.256252  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:49.296336  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:16:49.296382  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:49.379178  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:16:49.379212  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:16:49.409248  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:16:49.409273  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:16:49.437709  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:16:49.437739  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:16:49.511823  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:16:49.511859  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:16:49.549208  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:16:49.549294  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:16:49.643989  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:16:49.644035  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:16:49.658567  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:16:49.658605  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:16:49.715614  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:16:49.715638  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:16:49.715654  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:49.751920  225445 logs.go:123] Gathering logs for kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] ...
	I1229 07:16:49.751952  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	W1229 07:16:49.780385  225445 logs.go:138] Found kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] problem: E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:49.780422  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:16:49.780441  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:49.815064  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:49.815101  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:16:49.815155  225445 out.go:285] X Problems detected in kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90]:
	W1229 07:16:49.815171  225445 out.go:285]   E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:16:49.815177  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:49.815189  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
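Note: the interleaved 225445 lines show the diagnostic pattern used while the apiserver is unreachable: list container IDs per component with `crictl ps -a --quiet --name=<component>`, then tail the last 400 lines of each with `crictl logs --tail 400 <id>`. A compact sketch of that two-step flow with os/exec; the component names, flags and sudo prefix follow the log, everything else is illustrative.

// gather_logs.go - illustrative sketch of the crictl-based log gathering seen
// above ("Gathering logs for ..."). Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs whose name matches the given component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(comp)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no containers found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each matching container, as in the log.
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", comp, id, out)
		}
	}
}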
	I1229 07:16:54.118411  268071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:16:54.123468  268071 fix.go:56] duration metric: took 4.901774194s for fixHost
	I1229 07:16:54.123496  268071 start.go:83] releasing machines lock for "embed-certs-739827", held for 4.901829553s
	I1229 07:16:54.123568  268071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-739827
	I1229 07:16:54.144367  268071 ssh_runner.go:195] Run: cat /version.json
	I1229 07:16:54.144427  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:54.144475  268071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:16:54.144549  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:54.165900  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:54.166740  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:54.325848  268071 ssh_runner.go:195] Run: systemctl --version
	I1229 07:16:54.333096  268071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:16:54.378552  268071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:16:54.383371  268071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:16:54.383435  268071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:16:54.391645  268071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:16:54.391664  268071 start.go:496] detecting cgroup driver to use...
	I1229 07:16:54.391692  268071 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:16:54.391736  268071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:16:54.411531  268071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:16:54.426977  268071 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:16:54.427025  268071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:16:54.442322  268071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:16:54.455842  268071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:16:54.545359  268071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:16:54.626697  268071 docker.go:234] disabling docker service ...
	I1229 07:16:54.626749  268071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:16:54.640496  268071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:16:54.652042  268071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:16:54.731324  268071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:16:54.810557  268071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:16:54.822466  268071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:16:54.836181  268071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:16:54.836283  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.844955  268071 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:16:54.845015  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.853236  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.861414  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.869997  268071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:16:54.877832  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.886447  268071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.894595  268071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:54.903164  268071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:16:54.910354  268071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:16:54.917498  268071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:54.999905  268071 ssh_runner.go:195] Run: sudo systemctl restart crio
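Note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon_cgroup, default_sysctls), then reloads systemd and restarts crio. A small sketch of the same idempotent "replace the whole key line" edit in Go with regexp; the file path, keys and values come from the commands above, the rest is illustrative.

// crio_conf.go - illustrative sketch of the line-replacement edits done with
// sed above (e.g. pause_image and cgroup_manager in 02-crio.conf).
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any existing "<key> = ..." line with the quoted value,
// mirroring sed 's|^.*<key> = .*$|<key> = "<value>"|'.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "systemd")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("updated", path, "- restart crio to apply (systemctl restart crio)")
}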
	I1229 07:16:55.138714  268071 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:16:55.138798  268071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:16:55.142782  268071 start.go:574] Will wait 60s for crictl version
	I1229 07:16:55.142848  268071 ssh_runner.go:195] Run: which crictl
	I1229 07:16:55.146522  268071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:16:55.170001  268071 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:16:55.170065  268071 ssh_runner.go:195] Run: crio --version
	I1229 07:16:55.196203  268071 ssh_runner.go:195] Run: crio --version
	I1229 07:16:55.225122  268071 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:16:55.226416  268071 cli_runner.go:164] Run: docker network inspect embed-certs-739827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:16:55.243756  268071 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1229 07:16:55.247870  268071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
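Note: the two commands above ensure /etc/hosts maps host.minikube.internal to the network gateway (192.168.103.1): grep for the entry, then filter out any stale line and append the current one. A stdlib-only sketch of the same "drop old entry, append current one" update; the path and name/IP pair come straight from the log, the helper name is illustrative.

// hosts_entry.go - illustrative sketch of the /etc/hosts update shown above
// for host.minikube.internal.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps the given name to the given IP.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing line that already maps the managed hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + name + "\n"
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	// Values taken from the log; writing /etc/hosts requires root.
	if err := ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}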
	I1229 07:16:55.257968  268071 kubeadm.go:884] updating cluster {Name:embed-certs-739827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-739827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:16:55.258082  268071 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:55.258124  268071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:55.290195  268071 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:55.290233  268071 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:16:55.290295  268071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:55.317321  268071 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:55.317343  268071 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:16:55.317350  268071 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1229 07:16:55.317435  268071 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-739827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-739827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:16:55.317495  268071 ssh_runner.go:195] Run: crio config
	I1229 07:16:55.362409  268071 cni.go:84] Creating CNI manager for ""
	I1229 07:16:55.362433  268071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:55.362448  268071 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:16:55.362470  268071 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-739827 NodeName:embed-certs-739827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:16:55.362591  268071 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-739827"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
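Note: the generated kubeadm config above is a single multi-document YAML file bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A small sketch that walks such a file and reports each document's kind plus the ClusterConfiguration's kubernetesVersion, using gopkg.in/yaml.v3; the field names are limited to those visible above and the snippet is illustrative, not kubeadm's own parser.

// kubeadm_yaml.go - illustrative sketch: read a multi-document kubeadm config
// like the one generated above and report each document's kind.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type doc struct {
	APIVersion        string `yaml:"apiVersion"`
	Kind              string `yaml:"kind"`
	KubernetesVersion string `yaml:"kubernetesVersion"` // only set on ClusterConfiguration
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // target path from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s/%s", d.APIVersion, d.Kind)
		if d.KubernetesVersion != "" {
			fmt.Printf(" (kubernetesVersion: %s)", d.KubernetesVersion)
		}
		fmt.Println()
	}
}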
	
	I1229 07:16:55.362654  268071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:16:55.371551  268071 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:16:55.371622  268071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:16:55.379151  268071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1229 07:16:55.392005  268071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:16:55.404573  268071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1229 07:16:55.416447  268071 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:16:55.419970  268071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:55.429402  268071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:55.506987  268071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:55.533696  268071 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827 for IP: 192.168.103.2
	I1229 07:16:55.533716  268071 certs.go:195] generating shared ca certs ...
	I1229 07:16:55.533730  268071 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:55.533887  268071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:16:55.533945  268071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:16:55.533959  268071 certs.go:257] generating profile certs ...
	I1229 07:16:55.534067  268071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/client.key
	I1229 07:16:55.534143  268071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/apiserver.key.2a13e84f
	I1229 07:16:55.534213  268071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/proxy-client.key
	I1229 07:16:55.534376  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:16:55.534423  268071 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:16:55.534469  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:16:55.534510  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:16:55.534547  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:16:55.534579  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:16:55.534638  268071 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:55.535268  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:16:55.553091  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:16:55.571666  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:16:55.590416  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:16:55.612654  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1229 07:16:55.631860  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:16:55.649462  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:16:55.668006  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/embed-certs-739827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:16:55.685539  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:16:55.702160  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:16:55.719045  268071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:16:55.736825  268071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:16:55.748796  268071 ssh_runner.go:195] Run: openssl version
	I1229 07:16:55.754859  268071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.761973  268071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:16:55.769054  268071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.772576  268071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.772622  268071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:16:55.807333  268071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:16:55.815139  268071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.822628  268071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:16:55.829908  268071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.834170  268071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.834244  268071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:16:55.869069  268071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:16:55.876700  268071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.883926  268071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:16:55.890986  268071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.894623  268071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.894682  268071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:16:55.930007  268071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
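	[editor's note] The hashing and symlink steps above install each CA into the node's OpenSSL trust store: the certificate is linked into /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and the "sudo test -L /etc/ssl/certs/<hash>.0" checks verify that a hash-named symlink exists. A sketch of that subject-hash link step follows, under the assumption that the link is created with a plain "ln -fs" (minikube's exact mechanism may differ).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installHashLink derives the OpenSSL subject hash for a CA certificate and
	// creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients use
	// for hash-based CA lookup.
	func installHashLink(pemPath string) error {
		// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
	}

	func main() {
		if err := installHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("error:", err)
		}
	}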
	I1229 07:16:55.937656  268071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:16:55.941402  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:16:55.975959  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:16:56.009865  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:16:56.051542  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:16:56.096491  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:16:56.145609  268071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
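	[editor's note] Each "openssl x509 ... -checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it would expire within that window, which is how the restart path decides whether the existing control-plane certificates can be reused. A small sketch of the same check (hypothetical helper, same openssl invocation as in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// validForAnotherDay reports whether the certificate is still valid 24 hours
	// from now, using openssl's exit status exactly as the log lines above do.
	func validForAnotherDay(certPath string) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
		err := cmd.Run()
		if err == nil {
			return true, nil // still valid in 24h
		}
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // openssl signalled "will expire" via its exit status
		}
		return false, err // openssl could not be run at all
	}

	func main() {
		ok, err := validForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		fmt.Println(ok, err)
	}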
	I1229 07:16:56.196508  268071 kubeadm.go:401] StartCluster: {Name:embed-certs-739827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-739827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:56.196629  268071 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:16:56.196699  268071 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:16:56.229777  268071 cri.go:96] found id: "64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555"
	I1229 07:16:56.229800  268071 cri.go:96] found id: "9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a"
	I1229 07:16:56.229806  268071 cri.go:96] found id: "f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc"
	I1229 07:16:56.229810  268071 cri.go:96] found id: "0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c"
	I1229 07:16:56.229815  268071 cri.go:96] found id: ""
	I1229 07:16:56.229859  268071 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:16:56.241656  268071 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:16:56Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:16:56.241760  268071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:16:56.249651  268071 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:16:56.249669  268071 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:16:56.249710  268071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:16:56.256981  268071 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:16:56.257775  268071 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-739827" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:56.258182  268071 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-739827" cluster setting kubeconfig missing "embed-certs-739827" context setting]
	I1229 07:16:56.258856  268071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:56.260639  268071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:16:56.269283  268071 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1229 07:16:56.269315  268071 kubeadm.go:602] duration metric: took 19.640054ms to restartPrimaryControlPlane
	I1229 07:16:56.269326  268071 kubeadm.go:403] duration metric: took 72.829066ms to StartCluster
	I1229 07:16:56.269344  268071 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:56.269414  268071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:16:56.271281  268071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:16:56.271570  268071 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:16:56.271687  268071 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:16:56.271790  268071 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-739827"
	I1229 07:16:56.271806  268071 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-739827"
	I1229 07:16:56.271805  268071 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1229 07:16:56.271814  268071 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:16:56.271810  268071 addons.go:70] Setting dashboard=true in profile "embed-certs-739827"
	I1229 07:16:56.271833  268071 addons.go:239] Setting addon dashboard=true in "embed-certs-739827"
	I1229 07:16:56.271841  268071 host.go:66] Checking if "embed-certs-739827" exists ...
	W1229 07:16:56.271843  268071 addons.go:248] addon dashboard should already be in state true
	I1229 07:16:56.271838  268071 addons.go:70] Setting default-storageclass=true in profile "embed-certs-739827"
	I1229 07:16:56.271879  268071 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-739827"
	I1229 07:16:56.271881  268071 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:56.272194  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.272378  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.272384  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.273507  268071 out.go:179] * Verifying Kubernetes components...
	I1229 07:16:56.274580  268071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:56.296920  268071 addons.go:239] Setting addon default-storageclass=true in "embed-certs-739827"
	W1229 07:16:56.296949  268071 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:16:56.296979  268071 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:16:56.297493  268071 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:16:56.297612  268071 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:16:56.299120  268071 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:56.299143  268071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:16:56.299207  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:56.301260  268071 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:16:56.302499  268071 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:16:56.303582  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:16:56.303603  268071 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:16:56.303651  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:56.320255  268071 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:56.320281  268071 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:16:56.320345  268071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:16:56.324308  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:56.327626  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:56.355956  268071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:16:56.439323  268071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:16:56.450493  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:16:56.450520  268071 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:16:56.451780  268071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:16:56.452668  268071 node_ready.go:35] waiting up to 6m0s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:56.464268  268071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:16:56.464766  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:16:56.464789  268071 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:16:56.479017  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:16:56.479040  268071 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:16:56.493130  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:16:56.493149  268071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:16:56.506469  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:16:56.506494  268071 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:16:56.520469  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:16:56.520500  268071 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:16:56.533796  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:16:56.533818  268071 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:16:56.546016  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:16:56.546036  268071 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:16:56.558769  268071 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:56.558789  268071 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:16:56.570982  268071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:16:57.587424  268071 node_ready.go:49] node "embed-certs-739827" is "Ready"
	I1229 07:16:57.587456  268071 node_ready.go:38] duration metric: took 1.134759132s for node "embed-certs-739827" to be "Ready" ...
	I1229 07:16:57.587473  268071 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:16:57.587528  268071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:16:58.223824  268071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.772014189s)
	I1229 07:16:58.223903  268071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759604106s)
	I1229 07:16:58.224004  268071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.652986623s)
	I1229 07:16:58.224029  268071 api_server.go:72] duration metric: took 1.952426368s to wait for apiserver process to appear ...
	I1229 07:16:58.224044  268071 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:16:58.224065  268071 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:58.225705  268071 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-739827 addons enable metrics-server
	
	I1229 07:16:58.230342  268071 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:58.230365  268071 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:16:58.236061  268071 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1229 07:16:53.888535  269280 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-798607" ...
	I1229 07:16:53.888610  269280 cli_runner.go:164] Run: docker start default-k8s-diff-port-798607
	I1229 07:16:54.134779  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:16:54.155865  269280 kic.go:430] container "default-k8s-diff-port-798607" state is running.
	I1229 07:16:54.156327  269280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-798607
	I1229 07:16:54.179273  269280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/config.json ...
	I1229 07:16:54.179538  269280 machine.go:94] provisionDockerMachine start ...
	I1229 07:16:54.179606  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:54.199668  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:54.199951  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:54.199971  269280 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:16:54.200636  269280 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52880->127.0.0.1:33088: read: connection reset by peer
	I1229 07:16:57.353328  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-798607
	
	I1229 07:16:57.353357  269280 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-798607"
	I1229 07:16:57.353420  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:57.374413  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:57.374642  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:57.374661  269280 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-798607 && echo "default-k8s-diff-port-798607" | sudo tee /etc/hostname
	I1229 07:16:57.535265  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-798607
	
	I1229 07:16:57.535358  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:57.555067  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:57.555329  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:57.555349  269280 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-798607' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-798607/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-798607' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:16:57.734163  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:16:57.734195  269280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:16:57.734268  269280 ubuntu.go:190] setting up certificates
	I1229 07:16:57.734283  269280 provision.go:84] configureAuth start
	I1229 07:16:57.734347  269280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-798607
	I1229 07:16:57.759994  269280 provision.go:143] copyHostCerts
	I1229 07:16:57.760060  269280 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:16:57.760090  269280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:16:57.760184  269280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:16:57.760800  269280 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:16:57.760822  269280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:16:57.760870  269280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:16:57.761029  269280 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:16:57.761045  269280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:16:57.761087  269280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:16:57.761175  269280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-798607 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-798607 localhost minikube]
	I1229 07:16:57.858590  269280 provision.go:177] copyRemoteCerts
	I1229 07:16:57.858686  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:16:57.858730  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:57.883395  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:57.992988  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:16:58.022849  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:16:58.051752  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1229 07:16:58.076859  269280 provision.go:87] duration metric: took 342.540112ms to configureAuth
	I1229 07:16:58.076900  269280 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:16:58.077390  269280 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:58.077529  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.100804  269280 main.go:144] libmachine: Using SSH client type: native
	I1229 07:16:58.101122  269280 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1229 07:16:58.101161  269280 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:16:58.423291  269280 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:16:58.423324  269280 machine.go:97] duration metric: took 4.243766485s to provisionDockerMachine
	I1229 07:16:58.423339  269280 start.go:293] postStartSetup for "default-k8s-diff-port-798607" (driver="docker")
	I1229 07:16:58.423354  269280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:16:58.423415  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:16:58.423470  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.445130  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.544697  269280 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:16:58.548269  269280 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:16:58.548293  269280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:16:58.548303  269280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:16:58.548348  269280 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:16:58.548417  269280 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:16:58.548508  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:16:58.556094  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:16:58.576602  269280 start.go:296] duration metric: took 153.245299ms for postStartSetup
	I1229 07:16:58.576684  269280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:16:58.576730  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.598430  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.237027  268071 addons.go:530] duration metric: took 1.965349819s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1229 07:16:58.724321  268071 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:58.729969  268071 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:16:58.729991  268071 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
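	[editor's note] The two 500 responses above are expected while the restarted apiserver finishes its post-start hooks; rbac/bootstrap-roles (and, in the first dump, scheduling/bootstrap-system-priority-classes) report failed until bootstrapping completes, after which /healthz returns 200 and the wait loop proceeds. A minimal polling sketch of that wait, assuming TLS verification is skipped for brevity (a faithful check would trust the cluster CA at /var/lib/minikube/certs/ca.crt):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
	// the timeout elapses, printing each non-200 body much like the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}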
	I1229 07:16:58.702136  269280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:16:58.707636  269280 fix.go:56] duration metric: took 4.83888363s for fixHost
	I1229 07:16:58.707665  269280 start.go:83] releasing machines lock for "default-k8s-diff-port-798607", held for 4.838948781s
	I1229 07:16:58.707734  269280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-798607
	I1229 07:16:58.729806  269280 ssh_runner.go:195] Run: cat /version.json
	I1229 07:16:58.729859  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.729910  269280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:16:58.729993  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:16:58.751739  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.752472  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:16:58.906022  269280 ssh_runner.go:195] Run: systemctl --version
	I1229 07:16:58.912790  269280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:16:58.947369  269280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:16:58.952729  269280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:16:58.952791  269280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:16:58.961158  269280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:16:58.961188  269280 start.go:496] detecting cgroup driver to use...
	I1229 07:16:58.961229  269280 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:16:58.961275  269280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:16:58.976412  269280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:16:58.988422  269280 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:16:58.988487  269280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:16:59.003016  269280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:16:59.015790  269280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:16:59.097420  269280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:16:59.179667  269280 docker.go:234] disabling docker service ...
	I1229 07:16:59.179723  269280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:16:59.194569  269280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:16:59.207352  269280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:16:59.300204  269280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:16:59.381251  269280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:16:59.393641  269280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:16:59.407386  269280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:16:59.407436  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.415993  269280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:16:59.416052  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.424270  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.432612  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.441240  269280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:16:59.448948  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.457926  269280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.466179  269280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:16:59.474638  269280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:16:59.482009  269280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:16:59.489043  269280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:16:59.564381  269280 ssh_runner.go:195] Run: sudo systemctl restart crio
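	[editor's note] The sed commands above rewrite the existing /etc/crio/crio.conf.d/02-crio.conf drop-in before CRI-O is restarted. Reconstructed from those logged commands (section headers omitted; this is not a dump taken from the node), the relevant keys end up roughly as:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]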
	I1229 07:16:59.712532  269280 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:16:59.712589  269280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:16:59.716750  269280 start.go:574] Will wait 60s for crictl version
	I1229 07:16:59.716807  269280 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.720200  269280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:16:59.744400  269280 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:16:59.744487  269280 ssh_runner.go:195] Run: crio --version
	I1229 07:16:59.772076  269280 ssh_runner.go:195] Run: crio --version
	I1229 07:16:59.803331  269280 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:16:59.804981  269280 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-798607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:16:59.823371  269280 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:16:59.827437  269280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:16:59.838312  269280 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:16:59.838459  269280 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:16:59.838541  269280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:59.875565  269280 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:59.875586  269280 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:16:59.875639  269280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:16:59.906671  269280 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:16:59.906695  269280 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:16:59.906705  269280 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1229 07:16:59.906801  269280 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-798607 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:16:59.906871  269280 ssh_runner.go:195] Run: crio config
	I1229 07:16:59.960073  269280 cni.go:84] Creating CNI manager for ""
	I1229 07:16:59.960102  269280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:16:59.960120  269280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:16:59.960151  269280 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-798607 NodeName:default-k8s-diff-port-798607 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:16:59.960333  269280 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-798607"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:16:59.960405  269280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:16:59.969068  269280 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:16:59.969131  269280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:16:59.976580  269280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1229 07:16:59.991540  269280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:17:00.005777  269280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1229 07:17:00.020212  269280 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:17:00.024758  269280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:00.036880  269280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:00.131475  269280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:00.160048  269280 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607 for IP: 192.168.85.2
	I1229 07:17:00.160074  269280 certs.go:195] generating shared ca certs ...
	I1229 07:17:00.160094  269280 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.160273  269280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:17:00.160334  269280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:17:00.160351  269280 certs.go:257] generating profile certs ...
	I1229 07:17:00.160459  269280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/client.key
	I1229 07:17:00.160524  269280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/apiserver.key.86858a19
	I1229 07:17:00.160556  269280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/proxy-client.key
	I1229 07:17:00.160673  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:17:00.160710  269280 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:17:00.160720  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:17:00.160754  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:17:00.160787  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:17:00.160825  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:17:00.160901  269280 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:00.161691  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:17:00.182440  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:17:00.205142  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:17:00.226543  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:17:00.252897  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1229 07:17:00.275390  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:17:00.293737  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:17:00.310788  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/default-k8s-diff-port-798607/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:17:00.327602  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:17:00.345043  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:17:00.364798  269280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:17:00.384816  269280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:17:00.397657  269280 ssh_runner.go:195] Run: openssl version
	I1229 07:17:00.403639  269280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.411793  269280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:17:00.419573  269280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.423368  269280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.423415  269280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:00.461710  269280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:17:00.469582  269280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.477954  269280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:17:00.485941  269280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.489658  269280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.489709  269280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:17:00.528688  269280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:17:00.537870  269280 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.545649  269280 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:17:00.553726  269280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.557805  269280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.557869  269280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:17:00.594096  269280 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:17:00.601814  269280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:17:00.605469  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:17:00.641364  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:17:00.678445  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:17:00.734451  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:17:00.779781  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:17:00.837839  269280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:17:00.878628  269280 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-798607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-798607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:17:00.878727  269280 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:17:00.878789  269280 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:17:00.908801  269280 cri.go:96] found id: "2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102"
	I1229 07:17:00.908823  269280 cri.go:96] found id: "b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608"
	I1229 07:17:00.908829  269280 cri.go:96] found id: "7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833"
	I1229 07:17:00.908836  269280 cri.go:96] found id: "c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da"
	I1229 07:17:00.908841  269280 cri.go:96] found id: ""
	I1229 07:17:00.908884  269280 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:17:00.921184  269280 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:00Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:17:00.921264  269280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:17:00.929477  269280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:17:00.929494  269280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:17:00.929540  269280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:17:00.937562  269280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:17:00.938465  269280 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-798607" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:00.939015  269280 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-798607" cluster setting kubeconfig missing "default-k8s-diff-port-798607" context setting]
	I1229 07:17:00.939924  269280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.941618  269280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:17:00.949752  269280 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:17:00.949782  269280 kubeadm.go:602] duration metric: took 20.281612ms to restartPrimaryControlPlane
	I1229 07:17:00.949799  269280 kubeadm.go:403] duration metric: took 71.176228ms to StartCluster
	I1229 07:17:00.949816  269280 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.949884  269280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:00.952411  269280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:00.952767  269280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:00.952842  269280 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:17:00.952937  269280 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-798607"
	I1229 07:17:00.952953  269280 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-798607"
	W1229 07:17:00.952961  269280 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:17:00.952905  269280 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:00.952994  269280 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:00.952989  269280 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-798607"
	I1229 07:17:00.953010  269280 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-798607"
	W1229 07:17:00.953024  269280 addons.go:248] addon dashboard should already be in state true
	I1229 07:17:00.953048  269280 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:00.953041  269280 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-798607"
	I1229 07:17:00.953075  269280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-798607"
	I1229 07:17:00.953489  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.953544  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.953544  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.954872  269280 out.go:179] * Verifying Kubernetes components...
	I1229 07:17:00.956186  269280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:00.981243  269280 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-798607"
	W1229 07:17:00.981267  269280 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:17:00.981295  269280 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:00.981422  269280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:17:00.981428  269280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:17:00.981739  269280 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:00.982984  269280 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:00.983021  269280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:17:00.983070  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:00.986305  269280 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:17:00.987464  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:17:00.987493  269280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:17:00.987547  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:01.018440  269280 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:01.018536  269280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:17:01.018624  269280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:01.024615  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:01.025944  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:01.043350  269280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:01.098375  269280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:01.110911  269280 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:17:01.136377  269280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:01.138937  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:17:01.138961  269280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:17:01.153106  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:17:01.153128  269280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:17:01.156903  269280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:01.166688  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:17:01.166716  269280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:17:01.180384  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:17:01.180407  269280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:17:01.196316  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:17:01.196344  269280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:17:01.209332  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:17:01.209351  269280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:17:01.222884  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:17:01.222907  269280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:17:01.236019  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:17:01.236045  269280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:17:01.249461  269280 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:17:01.249482  269280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:17:01.262076  269280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:17:02.899952  269280 node_ready.go:49] node "default-k8s-diff-port-798607" is "Ready"
	I1229 07:17:02.899987  269280 node_ready.go:38] duration metric: took 1.78904051s for node "default-k8s-diff-port-798607" to be "Ready" ...
	I1229 07:17:02.900003  269280 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:17:02.900053  269280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:17:03.653408  269280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.496471206s)
	I1229 07:17:03.653606  269280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.391447046s)
	I1229 07:17:03.653417  269280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.516981335s)
	I1229 07:17:03.653770  269280 api_server.go:72] duration metric: took 2.700967736s to wait for apiserver process to appear ...
	I1229 07:17:03.653783  269280 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:17:03.653800  269280 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1229 07:17:03.655394  269280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-798607 addons enable metrics-server
	
	I1229 07:17:03.661118  269280 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:17:03.661144  269280 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:17:03.664741  269280 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1229 07:17:03.665617  269280 addons.go:530] duration metric: took 2.712781499s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1229 07:16:59.224356  268071 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1229 07:16:59.228577  268071 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1229 07:16:59.229566  268071 api_server.go:141] control plane version: v1.35.0
	I1229 07:16:59.229590  268071 api_server.go:131] duration metric: took 1.005538983s to wait for apiserver health ...
	I1229 07:16:59.229598  268071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:16:59.233207  268071 system_pods.go:59] 8 kube-system pods found
	I1229 07:16:59.233260  268071 system_pods.go:61] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:59.233273  268071 system_pods.go:61] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:59.233278  268071 system_pods.go:61] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:59.233285  268071 system_pods.go:61] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:59.233294  268071 system_pods.go:61] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:59.233298  268071 system_pods.go:61] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:59.233304  268071 system_pods.go:61] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:59.233307  268071 system_pods.go:61] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Running
	I1229 07:16:59.233325  268071 system_pods.go:74] duration metric: took 3.722075ms to wait for pod list to return data ...
	I1229 07:16:59.233334  268071 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:16:59.235660  268071 default_sa.go:45] found service account: "default"
	I1229 07:16:59.235683  268071 default_sa.go:55] duration metric: took 2.342702ms for default service account to be created ...
	I1229 07:16:59.235693  268071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:16:59.240999  268071 system_pods.go:86] 8 kube-system pods found
	I1229 07:16:59.241039  268071 system_pods.go:89] "coredns-7d764666f9-55529" [279a41bb-4bd1-4a8d-9999-27eb0a996229] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:16:59.241050  268071 system_pods.go:89] "etcd-embed-certs-739827" [c31026b1-ea55-4f4c-a4da-7acec9849459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:16:59.241057  268071 system_pods.go:89] "kindnet-l6mxr" [8c745434-7be8-4f4a-9685-0b2ebdcd1a6f] Running
	I1229 07:16:59.241067  268071 system_pods.go:89] "kube-apiserver-embed-certs-739827" [0bc8eb4f-d400-4844-930d-20bd4547241d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:16:59.241074  268071 system_pods.go:89] "kube-controller-manager-embed-certs-739827" [c97351b5-e77c-4fdf-909e-5953b4bf6a71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:16:59.241080  268071 system_pods.go:89] "kube-proxy-hdmp6" [ddf343da-e4e2-4ea1-a49d-02ad395abdaa] Running
	I1229 07:16:59.241089  268071 system_pods.go:89] "kube-scheduler-embed-certs-739827" [db6cf8d3-cae8-4601-ae36-c2003c4d368d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:16:59.241093  268071 system_pods.go:89] "storage-provisioner" [b7edc06a-181d-4c30-b979-9aa3f1f50ecb] Running
	I1229 07:16:59.241104  268071 system_pods.go:126] duration metric: took 5.403763ms to wait for k8s-apps to be running ...
	I1229 07:16:59.241116  268071 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:16:59.241163  268071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:16:59.260325  268071 system_svc.go:56] duration metric: took 19.201478ms WaitForService to wait for kubelet
	I1229 07:16:59.260362  268071 kubeadm.go:587] duration metric: took 2.988758005s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:16:59.260386  268071 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:16:59.263806  268071 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:16:59.263835  268071 node_conditions.go:123] node cpu capacity is 8
	I1229 07:16:59.263853  268071 node_conditions.go:105] duration metric: took 3.46033ms to run NodePressure ...
	I1229 07:16:59.263877  268071 start.go:242] waiting for startup goroutines ...
	I1229 07:16:59.263889  268071 start.go:247] waiting for cluster config update ...
	I1229 07:16:59.263903  268071 start.go:256] writing updated cluster config ...
	I1229 07:16:59.264243  268071 ssh_runner.go:195] Run: rm -f paused
	I1229 07:16:59.268391  268071 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:16:59.271672  268071 pod_ready.go:83] waiting for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:17:01.277861  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:03.278152  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:16:59.817661  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:16:59.818080  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:16:59.818142  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:16:59.818206  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:16:59.847971  225445 cri.go:96] found id: "8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:16:59.847997  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:16:59.848004  225445 cri.go:96] found id: ""
	I1229 07:16:59.848013  225445 logs.go:282] 2 containers: [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:16:59.848071  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.852004  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.856151  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:16:59.856233  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:16:59.886546  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:16:59.886568  225445 cri.go:96] found id: ""
	I1229 07:16:59.886577  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:16:59.886632  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.890671  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:16:59.890754  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:16:59.922157  225445 cri.go:96] found id: ""
	I1229 07:16:59.922184  225445 logs.go:282] 0 containers: []
	W1229 07:16:59.922193  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:16:59.922199  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:16:59.922269  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:16:59.955240  225445 cri.go:96] found id: "14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	I1229 07:16:59.955264  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:16:59.955270  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:16:59.955274  225445 cri.go:96] found id: ""
	I1229 07:16:59.955283  225445 logs.go:282] 3 containers: [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:16:59.955352  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.960257  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.964304  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:16:59.967919  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:16:59.967976  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:16:59.998958  225445 cri.go:96] found id: ""
	I1229 07:16:59.998983  225445 logs.go:282] 0 containers: []
	W1229 07:16:59.998992  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:16:59.999000  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:16:59.999053  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:17:00.031391  225445 cri.go:96] found id: "d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e"
	I1229 07:17:00.031411  225445 cri.go:96] found id: "a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:17:00.031414  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:17:00.031417  225445 cri.go:96] found id: ""
	I1229 07:17:00.031425  225445 logs.go:282] 3 containers: [d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:17:00.031479  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:00.036097  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:00.040261  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:00.044030  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:17:00.044095  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:17:00.082824  225445 cri.go:96] found id: ""
	I1229 07:17:00.082857  225445 logs.go:282] 0 containers: []
	W1229 07:17:00.082869  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:17:00.082878  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:17:00.082939  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:17:00.113619  225445 cri.go:96] found id: ""
	I1229 07:17:00.113650  225445 logs.go:282] 0 containers: []
	W1229 07:17:00.113661  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:17:00.113672  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:17:00.113689  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:17:00.142716  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:17:00.142741  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:17:00.235813  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:17:00.235902  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:17:00.338703  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:17:00.338733  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:17:00.352779  225445 logs.go:123] Gathering logs for kube-apiserver [8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11] ...
	I1229 07:17:00.352804  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b80982715281e14912ea726d2a5a182323febb349dfab3bcb2a610db7fa1d11"
	I1229 07:17:00.388501  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:17:00.388528  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:17:00.421889  225445 logs.go:123] Gathering logs for kube-controller-manager [d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e] ...
	I1229 07:17:00.421918  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e"
	I1229 07:17:00.449936  225445 logs.go:123] Gathering logs for kube-controller-manager [a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685] ...
	I1229 07:17:00.449961  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a67832fbf27ab39d26d7cd60ce6b9b5a496df64418a8f0557221862df96d6685"
	I1229 07:17:00.476666  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:17:00.476693  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:17:00.510330  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:17:00.510357  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1229 07:17:04.153827  269280 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1229 07:17:04.159130  269280 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1229 07:17:04.160353  269280 api_server.go:141] control plane version: v1.35.0
	I1229 07:17:04.160380  269280 api_server.go:131] duration metric: took 506.591567ms to wait for apiserver health ...
	I1229 07:17:04.160389  269280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:17:04.165903  269280 system_pods.go:59] 8 kube-system pods found
	I1229 07:17:04.165985  269280 system_pods.go:61] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:17:04.166009  269280 system_pods.go:61] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:17:04.166026  269280 system_pods.go:61] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:17:04.166065  269280 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:17:04.166083  269280 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:17:04.166095  269280 system_pods.go:61] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:17:04.166109  269280 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:17:04.166145  269280 system_pods.go:61] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:17:04.166160  269280 system_pods.go:74] duration metric: took 5.763295ms to wait for pod list to return data ...
	I1229 07:17:04.166170  269280 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:17:04.170290  269280 default_sa.go:45] found service account: "default"
	I1229 07:17:04.170315  269280 default_sa.go:55] duration metric: took 4.134253ms for default service account to be created ...
	I1229 07:17:04.170325  269280 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:17:04.174072  269280 system_pods.go:86] 8 kube-system pods found
	I1229 07:17:04.174106  269280 system_pods.go:89] "coredns-7d764666f9-jwmww" [1ab5b614-62d4-4118-9c4b-2e12e7ae7aec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:17:04.174115  269280 system_pods.go:89] "etcd-default-k8s-diff-port-798607" [e1c1af51-4014-4c32-bcff-e34907986cbd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:17:04.174128  269280 system_pods.go:89] "kindnet-m6jd2" [eae39509-802b-4a6e-b436-904c44761153] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:17:04.174138  269280 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-798607" [45d77ffe-320b-4e0c-b70c-c8f5c10e462f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:17:04.174147  269280 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-798607" [8babc737-acfc-4cad-9bd0-3f28bf89533b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:17:04.174155  269280 system_pods.go:89] "kube-proxy-4mnzc" [c322649a-8539-4264-9165-2a2522f06078] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:17:04.174163  269280 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-798607" [89336461-1b92-451b-b02f-3fe54f3b6309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:17:04.174178  269280 system_pods.go:89] "storage-provisioner" [77ec6576-1cba-401f-8b20-e6e97d7be45d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:17:04.174187  269280 system_pods.go:126] duration metric: took 3.855906ms to wait for k8s-apps to be running ...
	I1229 07:17:04.174196  269280 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:17:04.174258  269280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:04.192159  269280 system_svc.go:56] duration metric: took 17.955298ms WaitForService to wait for kubelet
	I1229 07:17:04.192189  269280 kubeadm.go:587] duration metric: took 3.239385657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:17:04.192231  269280 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:17:04.196080  269280 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:17:04.196109  269280 node_conditions.go:123] node cpu capacity is 8
	I1229 07:17:04.196125  269280 node_conditions.go:105] duration metric: took 3.888101ms to run NodePressure ...
	I1229 07:17:04.196140  269280 start.go:242] waiting for startup goroutines ...
	I1229 07:17:04.196161  269280 start.go:247] waiting for cluster config update ...
	I1229 07:17:04.196182  269280 start.go:256] writing updated cluster config ...
	I1229 07:17:04.196492  269280 ssh_runner.go:195] Run: rm -f paused
	I1229 07:17:04.201096  269280 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:17:04.204899  269280 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jwmww" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:17:06.211618  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:08.211712  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:05.778871  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:07.779737  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 29 07:16:31 no-preload-122332 crio[580]: time="2025-12-29T07:16:31.469558385Z" level=info msg="Started container" PID=1791 containerID=4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper id=99077551-8f48-4480-8193-6203bf551c66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4e6bf33b90cd084ab70e1629d95bb9642de99fd03f4957f8246aa41ff068c9
	Dec 29 07:16:32 no-preload-122332 crio[580]: time="2025-12-29T07:16:32.292339661Z" level=info msg="Removing container: f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145" id=9cdeb85c-12fa-4b94-b6c9-85b65ad7b2b0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:16:32 no-preload-122332 crio[580]: time="2025-12-29T07:16:32.304674566Z" level=info msg="Removed container f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=9cdeb85c-12fa-4b94-b6c9-85b65ad7b2b0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.318924017Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f528f112-70db-4990-8483-9916e3e5301a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.319953887Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58aab12a-1b18-4831-86b2-efd7c0c03641 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.320971906Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eaa6a59-7f8e-46c7-9d1e-d92f7b899879 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.321112626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325422665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325619378Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3432c767b3f9fe5d1c33d9a9ceb7dfbe8eef1b1fcc05b379b8ce5abe3b57c4b0/merged/etc/passwd: no such file or directory"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325649612Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3432c767b3f9fe5d1c33d9a9ceb7dfbe8eef1b1fcc05b379b8ce5abe3b57c4b0/merged/etc/group: no such file or directory"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.325948418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.352352079Z" level=info msg="Created container 545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96: kube-system/storage-provisioner/storage-provisioner" id=2eaa6a59-7f8e-46c7-9d1e-d92f7b899879 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.352972293Z" level=info msg="Starting container: 545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96" id=41a57976-39ec-4bea-847c-dc8e948bb212 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:43 no-preload-122332 crio[580]: time="2025-12-29T07:16:43.354790776Z" level=info msg="Started container" PID=1808 containerID=545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96 description=kube-system/storage-provisioner/storage-provisioner id=41a57976-39ec-4bea-847c-dc8e948bb212 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e08fbed83128e8a3aa813cf3f1f445a8cb3767b29a3f9d6f0218b7cbc487ef5
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.211687159Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0284802b-5455-47cf-9132-9150a238764c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.212896671Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4a359756-0b3f-4b02-a4f9-9f39d4a3ee74 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.214329854Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=d6292ce8-38f0-4bd3-ae77-414b4cfe85d9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.214492575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.221178329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.221886491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.244514565Z" level=info msg="Created container 322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=d6292ce8-38f0-4bd3-ae77-414b4cfe85d9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.245156699Z" level=info msg="Starting container: 322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828" id=e259a160-5096-4545-8cb2-b67ef0494dfb name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.246921154Z" level=info msg="Started container" PID=1848 containerID=322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper id=e259a160-5096-4545-8cb2-b67ef0494dfb name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4e6bf33b90cd084ab70e1629d95bb9642de99fd03f4957f8246aa41ff068c9
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.351037157Z" level=info msg="Removing container: 4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0" id=70cabe84-f1bd-4e30-b53e-48c9eb8cec57 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:16:54 no-preload-122332 crio[580]: time="2025-12-29T07:16:54.361526727Z" level=info msg="Removed container 4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc/dashboard-metrics-scraper" id=70cabe84-f1bd-4e30-b53e-48c9eb8cec57 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	322e7b29c6c56       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   3                   3e4e6bf33b90c       dashboard-metrics-scraper-867fb5f87b-8kjsc   kubernetes-dashboard
	545c1cbee5f14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   3e08fbed83128       storage-provisioner                          kube-system
	b4ab1c883154a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   6f98c18b2c171       kubernetes-dashboard-b84665fb8-vrx7d         kubernetes-dashboard
	f959fe071dc9a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   f49ee03213e08       busybox                                      default
	0da01eca9a562       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     0                   01d00e058e623       coredns-7d764666f9-6rcr2                     kube-system
	f6bda58857416       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   3e08fbed83128       storage-provisioner                          kube-system
	83ebe55fd0c59       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   00c641f33ad10       kindnet-rq99f                                kube-system
	4749520de1b72       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           57 seconds ago       Running             kube-proxy                  0                   bc79067c0f5b7       kube-proxy-qvww2                             kube-system
	182221ab78b63       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           About a minute ago   Running             kube-controller-manager     0                   55fca5419a43a       kube-controller-manager-no-preload-122332    kube-system
	3c840a729524e       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           About a minute ago   Running             kube-apiserver              0                   36ea1b63465cd       kube-apiserver-no-preload-122332             kube-system
	482322719dad6       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           About a minute ago   Running             kube-scheduler              0                   a46a8654dea5c       kube-scheduler-no-preload-122332             kube-system
	013472dcacb3d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   dc1be2ee47f52       etcd-no-preload-122332                       kube-system
	
	
	==> coredns [0da01eca9a562b5fe8053fa35b1c01007594c183cf9335c44971775cd1ec09d0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36395 - 25710 "HINFO IN 423111088672476072.3620522526429269876. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.01542756s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-122332
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-122332
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=no-preload-122332
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_15_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-122332
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:16:42 +0000   Mon, 29 Dec 2025 07:15:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-122332
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                da04b11d-c694-431a-acb9-a897f234eb76
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-7d764666f9-6rcr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-122332                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-rq99f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-122332              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-122332     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-qvww2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-122332              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-8kjsc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vrx7d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node no-preload-122332 event: Registered Node no-preload-122332 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node no-preload-122332 event: Registered Node no-preload-122332 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [013472dcacb3dee11074415629264465301e3f2be8dd69785de033ac3c97d206] <==
	{"level":"info","ts":"2025-12-29T07:16:09.769534Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:16:09.769581Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:16:09.769603Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-29T07:16:09.769715Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:16:09.769742Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:16:09.769778Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:16:09.769858Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:16:10.460201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:10.460267Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:10.460308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:10.460317Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:10.460332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.460896Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.460929Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:10.460964Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.460979Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:10.462299Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:10.462294Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-122332 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:16:10.462321Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:10.462718Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:10.462746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:10.464441Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:10.464554Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:10.466740Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-29T07:16:10.466806Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:17:10 up 59 min,  0 user,  load average: 2.59, 2.68, 2.01
	Linux no-preload-122332 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83ebe55fd0c5939b15566c7fa2cb8186d179a5062dc285850807eb6f771c21bb] <==
	I1229 07:16:12.811901       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:16:12.812191       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1229 07:16:12.812399       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:16:12.812428       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:16:12.812443       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:16:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:16:13.108371       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:16:13.108623       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:16:13.207008       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:16:13.208160       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:16:13.507836       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:16:13.507863       1 metrics.go:72] Registering metrics
	I1229 07:16:13.507915       1 controller.go:711] "Syncing nftables rules"
	I1229 07:16:23.107974       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:23.108057       1 main.go:301] handling current node
	I1229 07:16:33.107759       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:33.107788       1 main.go:301] handling current node
	I1229 07:16:43.107940       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:43.108000       1 main.go:301] handling current node
	I1229 07:16:53.107606       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:16:53.107644       1 main.go:301] handling current node
	I1229 07:17:03.107825       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1229 07:17:03.107870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c840a729524e5af9fc1ab0924ee6323875c1b5066189ad27582f5313c496cbc] <==
	I1229 07:16:11.587336       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:11.587394       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:16:11.587454       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:16:11.587463       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:16:11.587470       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:16:11.587647       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:16:11.587659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:16:11.587934       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1229 07:16:11.593934       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:16:11.594602       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:16:11.600264       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:11.603083       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:16:11.603100       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:16:11.633166       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:16:11.876801       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:16:11.903184       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:16:11.920235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:16:11.926159       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:16:11.933197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:16:11.963794       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.138.101"}
	I1229 07:16:11.974302       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.161.243"}
	I1229 07:16:12.492769       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:16:15.207352       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:16:15.257296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:16:15.456766       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [182221ab78b63253e283f5b17e6c4eefd8ff0cf8a867399484c79718b382becd] <==
	I1229 07:16:14.759460       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759477       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759493       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759398       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.759460       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760036       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760085       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760415       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760425       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760435       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760459       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760630       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760427       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760686       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760415       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.760740       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:16:14.760821       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-122332"
	I1229 07:16:14.760887       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:16:14.762883       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.763519       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.770385       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:14.860529       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:14.860547       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:16:14.860554       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:16:14.870930       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [4749520de1b726f631eef5a9218e09908cae4d296fcd6920b8b44725efffa5f9] <==
	I1229 07:16:12.659884       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:16:12.740653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:12.841762       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:12.841825       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1229 07:16:12.841930       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:16:12.865462       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:16:12.865533       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:16:12.872021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:16:12.872517       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:16:12.872532       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:12.874214       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:16:12.874243       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:16:12.874270       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:16:12.874276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:16:12.874353       1 config.go:309] "Starting node config controller"
	I1229 07:16:12.874367       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:16:12.874374       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:16:12.874085       1 config.go:200] "Starting service config controller"
	I1229 07:16:12.874390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:16:12.974508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:16:12.974535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:16:12.975096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [482322719dad640690982288c2258e90836d194891b2179cab964e1340265902] <==
	I1229 07:16:10.025238       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:16:11.501865       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:16:11.501916       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:16:11.501927       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:16:11.501937       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:16:11.546602       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:16:11.546637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:11.550479       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:16:11.550579       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:11.550945       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:16:11.551018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:16:11.651085       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:16:26 no-preload-122332 kubelet[732]: E1229 07:16:26.831289     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-122332" containerName="kube-controller-manager"
	Dec 29 07:16:31 no-preload-122332 kubelet[732]: E1229 07:16:31.421108     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:31 no-preload-122332 kubelet[732]: I1229 07:16:31.421142     732 scope.go:122] "RemoveContainer" containerID="f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: I1229 07:16:32.291024     732 scope.go:122] "RemoveContainer" containerID="f6077febbb8010d447c8c50ef72bb55285597337f424ce773b7ef2f351928145"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: E1229 07:16:32.291299     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: I1229 07:16:32.291338     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:32 no-preload-122332 kubelet[732]: E1229 07:16:32.291545     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:16:41 no-preload-122332 kubelet[732]: E1229 07:16:41.421154     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:41 no-preload-122332 kubelet[732]: I1229 07:16:41.421193     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:41 no-preload-122332 kubelet[732]: E1229 07:16:41.421392     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:16:43 no-preload-122332 kubelet[732]: I1229 07:16:43.318579     732 scope.go:122] "RemoveContainer" containerID="f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b"
	Dec 29 07:16:50 no-preload-122332 kubelet[732]: E1229 07:16:50.951630     732 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6rcr2" containerName="coredns"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: E1229 07:16:54.211013     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: I1229 07:16:54.211049     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: I1229 07:16:54.349303     732 scope.go:122] "RemoveContainer" containerID="4d2d448b4a7c3be44c2e8fc003543736bed79d9f1440dd278928896b191224a0"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: E1229 07:16:54.349551     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: I1229 07:16:54.349596     732 scope.go:122] "RemoveContainer" containerID="322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	Dec 29 07:16:54 no-preload-122332 kubelet[732]: E1229 07:16:54.349786     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:17:01 no-preload-122332 kubelet[732]: E1229 07:17:01.421129     732 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:01 no-preload-122332 kubelet[732]: I1229 07:17:01.421183     732 scope.go:122] "RemoveContainer" containerID="322e7b29c6c5691659866c9876262fb3eee6007fc245f7ce7d575d2de9068828"
	Dec 29 07:17:01 no-preload-122332 kubelet[732]: E1229 07:17:01.421424     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8kjsc_kubernetes-dashboard(ebddbcf3-af17-41a9-8034-37df434c96e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8kjsc" podUID="ebddbcf3-af17-41a9-8034-37df434c96e9"
	Dec 29 07:17:04 no-preload-122332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:04 no-preload-122332 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:04 no-preload-122332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:17:04 no-preload-122332 systemd[1]: kubelet.service: Consumed 1.728s CPU time.
	
	
	==> kubernetes-dashboard [b4ab1c883154a271188d140f15f54d642fc3b90bc67d3be7f26173073eed79c9] <==
	2025/12/29 07:16:18 Using namespace: kubernetes-dashboard
	2025/12/29 07:16:18 Using in-cluster config to connect to apiserver
	2025/12/29 07:16:18 Using secret token for csrf signing
	2025/12/29 07:16:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:16:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:16:18 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:16:18 Generating JWE encryption key
	2025/12/29 07:16:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:16:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:16:18 Initializing JWE encryption key from synchronized object
	2025/12/29 07:16:18 Creating in-cluster Sidecar client
	2025/12/29 07:16:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:16:18 Serving insecurely on HTTP port: 9090
	2025/12/29 07:16:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:16:18 Starting overwatch
	
	
	==> storage-provisioner [545c1cbee5f14da1e2b27f7f896e2dc4c58720ea9d59b706ffa64166d5bb9f96] <==
	I1229 07:16:43.366908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:16:43.374369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:16:43.374414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:16:43.376471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:46.831396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:51.091621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:54.691108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:16:57.745584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:00.768407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:00.774189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:00.774543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:17:00.774712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-122332_8f4ee5ed-151a-4e41-a32f-de4e707c566a!
	I1229 07:17:00.775093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"643709f2-3cd4-4ace-8f28-a3dfde29064a", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-122332_8f4ee5ed-151a-4e41-a32f-de4e707c566a became leader
	W1229 07:17:00.780500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:00.786915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:00.875831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-122332_8f4ee5ed-151a-4e41-a32f-de4e707c566a!
	W1229 07:17:02.790919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:02.801208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:04.805088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:04.812643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:06.822271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:06.860678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:08.864193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:08.869737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f6bda588574168156c2fbabe167417553897fbea83ffd12be951a62f9ebeef8b] <==
	I1229 07:16:12.626233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:16:42.629303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122332 -n no-preload-122332
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122332 -n no-preload-122332: exit status 2 (397.206781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-122332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.69s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (302.576524ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
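The MK_ADDON_ENABLE_PAUSED exit above is raised before the addon itself is evaluated: per the stderr, minikube's paused-state check shells out to "sudo runc list -f json", which fails on this crio-based profile because /run/runc does not exist on the node. A rough way to reproduce just that failing check by hand (a sketch, assuming the newest-cni-067566 profile is still running; the exact ssh invocation is an assumption, not taken from this log):
	# run the same runtime query minikube's pause check uses, inside the node
	out/minikube-linux-amd64 ssh -p newest-cni-067566 -- sudo runc list -f json
	# on a crio node this is expected to fail with: open /run/runc: no such file or directory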
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-067566
helpers_test.go:244: (dbg) docker inspect newest-cni-067566:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b",
	        "Created": "2025-12-29T07:17:19.198674026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:17:19.237515266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/hosts",
	        "LogPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b-json.log",
	        "Name": "/newest-cni-067566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-067566:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-067566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b",
	                "LowerDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-067566",
	                "Source": "/var/lib/docker/volumes/newest-cni-067566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-067566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-067566",
	                "name.minikube.sigs.k8s.io": "newest-cni-067566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2265902aa6eb0f26ef179fab68b18348bc7f3725481b4be0e1f17685ac5f9156",
	            "SandboxKey": "/var/run/docker/netns/2265902aa6eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-067566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f04963e259f7b5d31f9667ec06f8c8e0c565f69ad587935b8feaa506efff99b2",
	                    "EndpointID": "1084739fe6aa15895af0dae64c2ed9b3af68a7e4a604f5236b642d9085b749ff",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:fa:aa:e3:63:94",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-067566",
	                        "b76ee009518e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-067566 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-067566 logs -n 25: (1.06044126s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p old-k8s-version-876718                                                                                                                                                                                                                     │ old-k8s-version-876718       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-122332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │                     │
	│ start   │ -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ stop    │ -p no-preload-122332 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ delete  │ -p cert-expiration-452455                                                                                                                                                                                                                     │ cert-expiration-452455       │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ delete  │ -p disable-driver-mounts-708770                                                                                                                                                                                                               │ disable-driver-mounts-708770 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:15 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:15 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p no-preload-122332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:14.656712  275624 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:14.656972  275624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:14.656982  275624 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:14.656987  275624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:14.657152  275624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:14.657641  275624 out.go:368] Setting JSON to false
	I1229 07:17:14.658745  275624 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3587,"bootTime":1766989048,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:14.658803  275624 start.go:143] virtualization: kvm guest
	I1229 07:17:14.660767  275624 out.go:179] * [newest-cni-067566] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:14.662126  275624 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:14.662132  275624 notify.go:221] Checking for updates...
	I1229 07:17:14.663569  275624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:14.664953  275624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:14.666745  275624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:14.668140  275624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:14.669434  275624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:14.671108  275624 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:14.671215  275624 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:14.671334  275624 config.go:182] Loaded profile config "kubernetes-upgrade-174577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:14.671434  275624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:17:14.695400  275624 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:17:14.695475  275624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:14.753512  275624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:17:14.74279624 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:14.753616  275624 docker.go:319] overlay module found
	I1229 07:17:14.755300  275624 out.go:179] * Using the docker driver based on user configuration
	I1229 07:17:14.756448  275624 start.go:309] selected driver: docker
	I1229 07:17:14.756466  275624 start.go:928] validating driver "docker" against <nil>
	I1229 07:17:14.756480  275624 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:17:14.757107  275624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:14.813637  275624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:17:14.803539682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:14.813883  275624 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1229 07:17:14.813975  275624 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1229 07:17:14.814251  275624 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:17:14.816523  275624 out.go:179] * Using Docker driver with root privileges
	I1229 07:17:14.817803  275624 cni.go:84] Creating CNI manager for ""
	I1229 07:17:14.817861  275624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:14.817872  275624 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:17:14.817928  275624 start.go:353] cluster config:
	{Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:17:14.819152  275624 out.go:179] * Starting "newest-cni-067566" primary control-plane node in "newest-cni-067566" cluster
	I1229 07:17:14.820154  275624 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:17:14.821319  275624 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:17:14.822789  275624 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:14.822826  275624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:17:14.822836  275624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:17:14.822851  275624 cache.go:65] Caching tarball of preloaded images
	I1229 07:17:14.822938  275624 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:17:14.822952  275624 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:17:14.823062  275624 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json ...
	I1229 07:17:14.823090  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json: {Name:mk2bc0e4d4478938819bd53062f0792a0309e53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:14.843664  275624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:17:14.843686  275624 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:17:14.843702  275624 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:17:14.843738  275624 start.go:360] acquireMachinesLock for newest-cni-067566: {Name:mkf05fa3f36d58ec22c6bd1a8fd9fcba373b113a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:17:14.843852  275624 start.go:364] duration metric: took 92.539µs to acquireMachinesLock for "newest-cni-067566"
	I1229 07:17:14.843881  275624 start.go:93] Provisioning new machine with config: &{Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:14.843984  275624 start.go:125] createHost starting for "" (driver="docker")
	W1229 07:17:15.711308  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:18.210553  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:14.277628  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:16.277853  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:18.490747  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:17:14.845859  275624 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:17:14.846106  275624 start.go:159] libmachine.API.Create for "newest-cni-067566" (driver="docker")
	I1229 07:17:14.846137  275624 client.go:173] LocalClient.Create starting
	I1229 07:17:14.846262  275624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:17:14.846307  275624 main.go:144] libmachine: Decoding PEM data...
	I1229 07:17:14.846334  275624 main.go:144] libmachine: Parsing certificate...
	I1229 07:17:14.846404  275624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:17:14.846432  275624 main.go:144] libmachine: Decoding PEM data...
	I1229 07:17:14.846450  275624 main.go:144] libmachine: Parsing certificate...
	I1229 07:17:14.846814  275624 cli_runner.go:164] Run: docker network inspect newest-cni-067566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:17:14.864964  275624 cli_runner.go:211] docker network inspect newest-cni-067566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:17:14.865037  275624 network_create.go:284] running [docker network inspect newest-cni-067566] to gather additional debugging logs...
	I1229 07:17:14.865061  275624 cli_runner.go:164] Run: docker network inspect newest-cni-067566
	W1229 07:17:14.881616  275624 cli_runner.go:211] docker network inspect newest-cni-067566 returned with exit code 1
	I1229 07:17:14.881644  275624 network_create.go:287] error running [docker network inspect newest-cni-067566]: docker network inspect newest-cni-067566: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-067566 not found
	I1229 07:17:14.881661  275624 network_create.go:289] output of [docker network inspect newest-cni-067566]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-067566 not found
	
	** /stderr **
	I1229 07:17:14.881806  275624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:14.900340  275624 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:17:14.901011  275624 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:17:14.901755  275624 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:17:14.902180  275624 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66e171323e2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:d9:01:28:19:dc} reservation:<nil>}
	I1229 07:17:14.902744  275624 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a50196d85ec6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:52:30:53:e5:57:03} reservation:<nil>}
	I1229 07:17:14.903524  275624 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ff8680}
	I1229 07:17:14.903552  275624 network_create.go:124] attempt to create docker network newest-cni-067566 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1229 07:17:14.903594  275624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-067566 newest-cni-067566
	I1229 07:17:14.952951  275624 network_create.go:108] docker network newest-cni-067566 192.168.94.0/24 created
	I1229 07:17:14.952985  275624 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-067566" container
	I1229 07:17:14.953077  275624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:17:14.972364  275624 cli_runner.go:164] Run: docker volume create newest-cni-067566 --label name.minikube.sigs.k8s.io=newest-cni-067566 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:17:14.989939  275624 oci.go:103] Successfully created a docker volume newest-cni-067566
	I1229 07:17:14.990020  275624 cli_runner.go:164] Run: docker run --rm --name newest-cni-067566-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-067566 --entrypoint /usr/bin/test -v newest-cni-067566:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:17:15.379363  275624 oci.go:107] Successfully prepared a docker volume newest-cni-067566
	I1229 07:17:15.379459  275624 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:15.379494  275624 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:17:15.379570  275624 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-067566:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:17:19.126564  275624 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-067566:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.746954282s)
	I1229 07:17:19.126594  275624 kic.go:203] duration metric: took 3.747113555s to extract preloaded images to volume ...
	W1229 07:17:19.126678  275624 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:17:19.126705  275624 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:17:19.126740  275624 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:17:19.181298  275624 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-067566 --name newest-cni-067566 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-067566 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-067566 --network newest-cni-067566 --ip 192.168.94.2 --volume newest-cni-067566:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:17:19.464257  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Running}}
	I1229 07:17:19.482420  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:19.500504  275624 cli_runner.go:164] Run: docker exec newest-cni-067566 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:17:19.548729  275624 oci.go:144] the created container "newest-cni-067566" has a running status.
	I1229 07:17:19.548766  275624 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa...
	I1229 07:17:19.589605  275624 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:17:19.620774  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:19.637834  275624 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:17:19.637855  275624 kic_runner.go:114] Args: [docker exec --privileged newest-cni-067566 chown docker:docker /home/docker/.ssh/authorized_keys]
	W1229 07:17:20.710326  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:23.210519  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:20.776728  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:22.777159  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:17:20.795981  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:17:20.796464  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:17:20.796530  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:17:20.796587  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:17:20.823882  225445 cri.go:96] found id: "5fbecbb2283ceda81439283eacce2cbf6249d02f5d1e89b1777962e1e91d663d"
	I1229 07:17:20.823905  225445 cri.go:96] found id: "3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:17:20.823911  225445 cri.go:96] found id: ""
	I1229 07:17:20.823920  225445 logs.go:282] 2 containers: [5fbecbb2283ceda81439283eacce2cbf6249d02f5d1e89b1777962e1e91d663d 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67]
	I1229 07:17:20.823973  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.828163  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.832182  225445 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:17:20.832262  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:17:20.859517  225445 cri.go:96] found id: "02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:17:20.859545  225445 cri.go:96] found id: ""
	I1229 07:17:20.859555  225445 logs.go:282] 1 containers: [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd]
	I1229 07:17:20.859616  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.863554  225445 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:17:20.863615  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:17:20.889764  225445 cri.go:96] found id: ""
	I1229 07:17:20.889790  225445 logs.go:282] 0 containers: []
	W1229 07:17:20.889801  225445 logs.go:284] No container was found matching "coredns"
	I1229 07:17:20.889809  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:17:20.889852  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:17:20.915848  225445 cri.go:96] found id: "14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	I1229 07:17:20.915874  225445 cri.go:96] found id: "1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:17:20.915879  225445 cri.go:96] found id: "83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:17:20.915882  225445 cri.go:96] found id: ""
	I1229 07:17:20.915888  225445 logs.go:282] 3 containers: [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1]
	I1229 07:17:20.915950  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.920017  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.923669  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.927302  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:17:20.927347  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:17:20.954366  225445 cri.go:96] found id: ""
	I1229 07:17:20.954396  225445 logs.go:282] 0 containers: []
	W1229 07:17:20.954407  225445 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:17:20.954415  225445 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:17:20.954483  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:17:20.981912  225445 cri.go:96] found id: "d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e"
	I1229 07:17:20.981940  225445 cri.go:96] found id: "c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:17:20.981946  225445 cri.go:96] found id: ""
	I1229 07:17:20.981954  225445 logs.go:282] 2 containers: [d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66]
	I1229 07:17:20.982009  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.985927  225445 ssh_runner.go:195] Run: which crictl
	I1229 07:17:20.989464  225445 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:17:20.989518  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:17:21.016462  225445 cri.go:96] found id: ""
	I1229 07:17:21.016487  225445 logs.go:282] 0 containers: []
	W1229 07:17:21.016496  225445 logs.go:284] No container was found matching "kindnet"
	I1229 07:17:21.016504  225445 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1229 07:17:21.016562  225445 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1229 07:17:21.042933  225445 cri.go:96] found id: ""
	I1229 07:17:21.042965  225445 logs.go:282] 0 containers: []
	W1229 07:17:21.042977  225445 logs.go:284] No container was found matching "storage-provisioner"
	I1229 07:17:21.042989  225445 logs.go:123] Gathering logs for kubelet ...
	I1229 07:17:21.043003  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:17:21.134124  225445 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:17:21.134157  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:17:21.189968  225445 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:17:21.189995  225445 logs.go:123] Gathering logs for kube-controller-manager [d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e] ...
	I1229 07:17:21.190007  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d89f0865da0d6d00be5eaa57878033c5f8099b0390b297f0364ac2b1a1c1463e"
	I1229 07:17:21.217738  225445 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:17:21.217769  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:17:21.291008  225445 logs.go:123] Gathering logs for container status ...
	I1229 07:17:21.291042  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:17:21.326739  225445 logs.go:123] Gathering logs for dmesg ...
	I1229 07:17:21.326776  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:17:21.341333  225445 logs.go:123] Gathering logs for kube-apiserver [5fbecbb2283ceda81439283eacce2cbf6249d02f5d1e89b1777962e1e91d663d] ...
	I1229 07:17:21.341366  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5fbecbb2283ceda81439283eacce2cbf6249d02f5d1e89b1777962e1e91d663d"
	I1229 07:17:21.375409  225445 logs.go:123] Gathering logs for kube-apiserver [3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67] ...
	I1229 07:17:21.375437  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a9c461f6549cedfcb902d493a98dfdfd45573c28ec554e6b1d2ce904ee1fd67"
	I1229 07:17:21.407042  225445 logs.go:123] Gathering logs for etcd [02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd] ...
	I1229 07:17:21.407072  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02694b727a7ef5615b676832bc306017d23da15eff0a3aa9c1e32630ffe9e1bd"
	I1229 07:17:21.440625  225445 logs.go:123] Gathering logs for kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] ...
	I1229 07:17:21.440655  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90"
	W1229 07:17:21.466722  225445 logs.go:138] Found kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90] problem: E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:17:21.466756  225445 logs.go:123] Gathering logs for kube-scheduler [1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4] ...
	I1229 07:17:21.466770  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1a2a5ec730671fe2fec7f6b801e54afe17547e25677f0f6dc85052e44f7374d4"
	I1229 07:17:21.541245  225445 logs.go:123] Gathering logs for kube-scheduler [83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1] ...
	I1229 07:17:21.541284  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83f0b45b14d0f5c0f2428e774b2c22cc1cbf1cdc631be13cb8017de1791ed9c1"
	I1229 07:17:21.569507  225445 logs.go:123] Gathering logs for kube-controller-manager [c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66] ...
	I1229 07:17:21.569534  225445 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6904a1539983ab61b05abbaa0001209617c96ddc1921ff83557531b68191f66"
	I1229 07:17:21.597886  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:21.597910  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1229 07:17:21.597970  225445 out.go:285] X Problems detected in kube-scheduler [14199977eb133b2984f339d2e84d171a190e387623bebdf7ed21e27d8cf71d90]:
	W1229 07:17:21.597988  225445 out.go:285]   E1229 07:16:24.252692       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	I1229 07:17:21.597997  225445 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:21.598003  225445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:19.676783  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:19.698768  275624 machine.go:94] provisionDockerMachine start ...
	I1229 07:17:19.698878  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:19.719419  275624 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:19.719667  275624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1229 07:17:19.719682  275624 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:17:19.720363  275624 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41634->127.0.0.1:33093: read: connection reset by peer
	I1229 07:17:22.858085  275624 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:22.858110  275624 ubuntu.go:182] provisioning hostname "newest-cni-067566"
	I1229 07:17:22.858174  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:22.877095  275624 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:22.877338  275624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1229 07:17:22.877353  275624 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-067566 && echo "newest-cni-067566" | sudo tee /etc/hostname
	I1229 07:17:23.022711  275624 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:23.022787  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:23.040306  275624 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:23.040514  275624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1229 07:17:23.040529  275624 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-067566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-067566/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-067566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:17:23.176729  275624 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:17:23.176759  275624 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:17:23.176783  275624 ubuntu.go:190] setting up certificates
	I1229 07:17:23.176796  275624 provision.go:84] configureAuth start
	I1229 07:17:23.176902  275624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:23.195239  275624 provision.go:143] copyHostCerts
	I1229 07:17:23.195294  275624 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:17:23.195311  275624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:17:23.195397  275624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:17:23.195518  275624 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:17:23.195531  275624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:17:23.195573  275624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:17:23.195676  275624 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:17:23.195687  275624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:17:23.195726  275624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:17:23.195817  275624 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.newest-cni-067566 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-067566]
	I1229 07:17:23.236752  275624 provision.go:177] copyRemoteCerts
	I1229 07:17:23.236801  275624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:17:23.236861  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:23.255029  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:23.353289  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:17:23.372521  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:17:23.390549  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:17:23.407773  275624 provision.go:87] duration metric: took 230.951219ms to configureAuth
	I1229 07:17:23.407800  275624 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:17:23.407973  275624 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:23.408085  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:23.426117  275624 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:23.426350  275624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1229 07:17:23.426369  275624 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:17:23.707396  275624 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:17:23.707418  275624 machine.go:97] duration metric: took 4.008625729s to provisionDockerMachine
	I1229 07:17:23.707428  275624 client.go:176] duration metric: took 8.861273524s to LocalClient.Create
	I1229 07:17:23.707447  275624 start.go:167] duration metric: took 8.861342113s to libmachine.API.Create "newest-cni-067566"
	I1229 07:17:23.707457  275624 start.go:293] postStartSetup for "newest-cni-067566" (driver="docker")
	I1229 07:17:23.707470  275624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:17:23.707537  275624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:17:23.707576  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:23.726752  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:23.827490  275624 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:17:23.831066  275624 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:17:23.831087  275624 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:17:23.831096  275624 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:17:23.831149  275624 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:17:23.831266  275624 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:17:23.831380  275624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:17:23.839022  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:23.859454  275624 start.go:296] duration metric: took 151.984611ms for postStartSetup
	I1229 07:17:23.859815  275624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:23.878525  275624 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json ...
	I1229 07:17:23.878767  275624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:17:23.878807  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:23.895799  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:23.990589  275624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:17:23.995334  275624 start.go:128] duration metric: took 9.151327665s to createHost
	I1229 07:17:23.995358  275624 start.go:83] releasing machines lock for "newest-cni-067566", held for 9.151492549s
	I1229 07:17:23.995443  275624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:24.013822  275624 ssh_runner.go:195] Run: cat /version.json
	I1229 07:17:24.013861  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:24.013910  275624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:17:24.013967  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:24.032719  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:24.033055  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:24.126135  275624 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:24.179240  275624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:17:24.214442  275624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:17:24.219601  275624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:17:24.219670  275624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:17:24.245439  275624 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1229 07:17:24.245463  275624 start.go:496] detecting cgroup driver to use...
	I1229 07:17:24.245495  275624 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:17:24.245545  275624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:17:24.261540  275624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:17:24.274042  275624 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:17:24.274104  275624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:17:24.290668  275624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:17:24.307838  275624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:17:24.393496  275624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:17:24.480696  275624 docker.go:234] disabling docker service ...
	I1229 07:17:24.480760  275624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:17:24.498604  275624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:17:24.512195  275624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:17:24.595794  275624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:17:24.676688  275624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:17:24.688767  275624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:17:24.702264  275624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:17:24.702318  275624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.712693  275624 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:17:24.712747  275624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.721471  275624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.729920  275624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.738261  275624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:17:24.745837  275624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.754209  275624 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.766904  275624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:24.775989  275624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:17:24.783442  275624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:17:24.790415  275624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:24.869871  275624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:17:25.000630  275624 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:17:25.000708  275624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:17:25.004627  275624 start.go:574] Will wait 60s for crictl version
	I1229 07:17:25.004672  275624 ssh_runner.go:195] Run: which crictl
	I1229 07:17:25.008245  275624 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:17:25.034063  275624 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:17:25.034158  275624 ssh_runner.go:195] Run: crio --version
	I1229 07:17:25.061365  275624 ssh_runner.go:195] Run: crio --version
	I1229 07:17:25.089913  275624 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:17:25.091082  275624 cli_runner.go:164] Run: docker network inspect newest-cni-067566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:25.108975  275624 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:17:25.113145  275624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:25.124574  275624 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1229 07:17:25.210623  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:27.710384  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:25.278423  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:27.777096  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:17:25.125650  275624 kubeadm.go:884] updating cluster {Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:17:25.125755  275624 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:25.125808  275624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:17:25.158613  275624 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:17:25.158632  275624 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:17:25.158680  275624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:17:25.184507  275624 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:17:25.184527  275624 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:17:25.184533  275624 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:17:25.184612  275624 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-067566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:17:25.184673  275624 ssh_runner.go:195] Run: crio config
	I1229 07:17:25.229484  275624 cni.go:84] Creating CNI manager for ""
	I1229 07:17:25.229503  275624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:25.229518  275624 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1229 07:17:25.229539  275624 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-067566 NodeName:newest-cni-067566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:17:25.229666  275624 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-067566"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:17:25.229723  275624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:17:25.237911  275624 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:17:25.237970  275624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:17:25.245575  275624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:17:25.257929  275624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:17:25.272641  275624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1229 07:17:25.286026  275624 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:17:25.289745  275624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:25.299413  275624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:25.379326  275624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:25.411711  275624 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566 for IP: 192.168.94.2
	I1229 07:17:25.411730  275624 certs.go:195] generating shared ca certs ...
	I1229 07:17:25.411744  275624 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.411882  275624 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:17:25.411963  275624 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:17:25.411975  275624 certs.go:257] generating profile certs ...
	I1229 07:17:25.412044  275624 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.key
	I1229 07:17:25.412063  275624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.crt with IP's: []
	I1229 07:17:25.489095  275624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.crt ...
	I1229 07:17:25.489125  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.crt: {Name:mk8bde36eaceb6cffaeab09ab88668e6ed8ca288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.489302  275624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.key ...
	I1229 07:17:25.489315  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.key: {Name:mka18134c9ba0a989bd8d59718dc219e1f592781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.489401  275624 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key.f6ce96bf
	I1229 07:17:25.489416  275624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt.f6ce96bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1229 07:17:25.563031  275624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt.f6ce96bf ...
	I1229 07:17:25.563060  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt.f6ce96bf: {Name:mk77a3910868fd3df1f247ee1a6ad6619ff158dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.563227  275624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key.f6ce96bf ...
	I1229 07:17:25.563240  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key.f6ce96bf: {Name:mk133eaf37ddb447d130e55abe1bc5af76402242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.563329  275624 certs.go:382] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt.f6ce96bf -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt
	I1229 07:17:25.563403  275624 certs.go:386] copying /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key.f6ce96bf -> /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key
	I1229 07:17:25.563465  275624 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key
	I1229 07:17:25.563482  275624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.crt with IP's: []
	I1229 07:17:25.597778  275624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.crt ...
	I1229 07:17:25.597803  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.crt: {Name:mkaeadc003a8d37ec778a4c1c52159d675ffa287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.597948  275624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key ...
	I1229 07:17:25.597960  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key: {Name:mk3c07b5c7ea5a3fa7f15f88e04567f78fb45577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:25.598123  275624 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:17:25.598162  275624 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:17:25.598172  275624 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:17:25.598196  275624 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:17:25.598232  275624 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:17:25.598256  275624 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:17:25.598296  275624 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:25.598806  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:17:25.617464  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:17:25.636617  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:17:25.653910  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:17:25.670888  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:17:25.687754  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:17:25.704883  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:17:25.722956  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:17:25.740391  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:17:25.758765  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:17:25.776737  275624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:17:25.793725  275624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:17:25.805582  275624 ssh_runner.go:195] Run: openssl version
	I1229 07:17:25.811287  275624 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:17:25.818342  275624 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:17:25.825569  275624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:17:25.829056  275624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:17:25.829092  275624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:17:25.864751  275624 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:17:25.872286  275624 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127332.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:17:25.879399  275624 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:25.886679  275624 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:17:25.893804  275624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:25.897430  275624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:25.897474  275624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:25.932286  275624 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:17:25.940404  275624 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:17:25.948751  275624 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:17:25.956341  275624 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:17:25.963504  275624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:17:25.967202  275624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:17:25.967301  275624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:17:26.001280  275624 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:17:26.009022  275624 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12733.pem /etc/ssl/certs/51391683.0
	I1229 07:17:26.016460  275624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:17:26.020081  275624 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:17:26.020135  275624 kubeadm.go:401] StartCluster: {Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:17:26.020203  275624 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:17:26.020271  275624 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:17:26.046578  275624 cri.go:96] found id: ""
	I1229 07:17:26.046650  275624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:17:26.054818  275624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:17:26.062847  275624 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:17:26.062921  275624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:17:26.070578  275624 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:17:26.070596  275624 kubeadm.go:158] found existing configuration files:
	
	I1229 07:17:26.070638  275624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:17:26.078067  275624 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:17:26.078119  275624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:17:26.085291  275624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:17:26.092800  275624 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:17:26.092851  275624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:17:26.099918  275624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:17:26.107133  275624 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:17:26.107173  275624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:17:26.114124  275624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:17:26.121722  275624 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:17:26.121776  275624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:17:26.129629  275624 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:17:26.242349  275624 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:17:26.299121  275624 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:17:31.599307  225445 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:17:31.599745  225445 api_server.go:315] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1229 07:17:31.599832  225445 kubeadm.go:602] duration metric: took 4m17.270791655s to restartPrimaryControlPlane
	W1229 07:17:31.599898  225445 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1229 07:17:31.599966  225445 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1229 07:17:32.629807  225445 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.029817305s)
	I1229 07:17:32.629866  225445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:32.642561  225445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:17:32.650839  225445 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:17:32.650894  225445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:17:32.658849  225445 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:17:32.658868  225445 kubeadm.go:158] found existing configuration files:
	
	I1229 07:17:32.658910  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:17:32.666496  225445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:17:32.666546  225445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:17:32.674130  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:17:32.682449  225445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:17:32.682515  225445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:17:32.690442  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:17:32.698242  225445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:17:32.698282  225445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:17:32.705927  225445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:17:32.714195  225445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:17:32.714263  225445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:17:32.721540  225445 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:17:32.757413  225445 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:17:32.757549  225445 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:17:32.821569  225445 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:17:32.821663  225445 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1229 07:17:32.821740  225445 kubeadm.go:319] OS: Linux
	I1229 07:17:32.821806  225445 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:17:32.821875  225445 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:17:32.821945  225445 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:17:32.821998  225445 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:17:32.822073  225445 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:17:32.822147  225445 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:17:32.822241  225445 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:17:32.822328  225445 kubeadm.go:319] CGROUPS_IO: enabled
	I1229 07:17:32.878609  225445 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:17:32.878760  225445 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:17:32.878886  225445 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:17:32.885979  225445 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:17:33.174731  275624 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:17:33.174824  275624 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:17:33.174944  275624 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:17:33.175041  275624 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1229 07:17:33.175108  275624 kubeadm.go:319] OS: Linux
	I1229 07:17:33.175178  275624 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:17:33.175292  275624 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:17:33.175360  275624 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:17:33.175445  275624 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:17:33.175514  275624 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:17:33.175589  275624 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:17:33.175678  275624 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:17:33.175756  275624 kubeadm.go:319] CGROUPS_IO: enabled
	I1229 07:17:33.175873  275624 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:17:33.176034  275624 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:17:33.176165  275624 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:17:33.176284  275624 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:17:33.177867  275624 out.go:252]   - Generating certificates and keys ...
	I1229 07:17:33.177954  275624 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:17:33.178010  275624 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:17:33.178124  275624 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:17:33.178255  275624 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:17:33.178352  275624 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:17:33.178406  275624 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:17:33.178458  275624 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:17:33.178580  275624 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-067566] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1229 07:17:33.178658  275624 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:17:33.178808  275624 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-067566] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1229 07:17:33.178878  275624 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:17:33.178952  275624 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:17:33.179013  275624 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:17:33.179109  275624 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:17:33.179176  275624 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:17:33.179269  275624 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:17:33.179342  275624 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:17:33.179408  275624 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:17:33.179464  275624 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:17:33.179530  275624 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:17:33.179587  275624 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:17:33.180882  275624 out.go:252]   - Booting up control plane ...
	I1229 07:17:33.180975  275624 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:17:33.181045  275624 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:17:33.181111  275624 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:17:33.181240  275624 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:17:33.181359  275624 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:17:33.181485  275624 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:17:33.181567  275624 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:17:33.181603  275624 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:17:33.181741  275624 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:17:33.181836  275624 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:17:33.181891  275624 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.783047ms
	I1229 07:17:33.181972  275624 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:17:33.182045  275624 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1229 07:17:33.182130  275624 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:17:33.182206  275624 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:17:33.182297  275624 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004115708s
	I1229 07:17:33.182373  275624 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.909577181s
	I1229 07:17:33.182455  275624 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50180713s
	I1229 07:17:33.182606  275624 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:17:33.182756  275624 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:17:33.182840  275624 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:17:33.183016  275624 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-067566 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:17:33.183073  275624 kubeadm.go:319] [bootstrap-token] Using token: aqn0bv.5scivwhms5r84246
	I1229 07:17:33.184369  275624 out.go:252]   - Configuring RBAC rules ...
	I1229 07:17:33.184458  275624 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:17:33.184545  275624 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:17:33.184700  275624 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:17:33.184822  275624 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:17:33.184947  275624 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:17:33.185024  275624 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:17:33.185129  275624 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:17:33.185169  275624 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:17:33.185234  275624 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:17:33.185247  275624 kubeadm.go:319] 
	I1229 07:17:33.185311  275624 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:17:33.185318  275624 kubeadm.go:319] 
	I1229 07:17:33.185389  275624 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:17:33.185396  275624 kubeadm.go:319] 
	I1229 07:17:33.185426  275624 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:17:33.185477  275624 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:17:33.185524  275624 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:17:33.185530  275624 kubeadm.go:319] 
	I1229 07:17:33.185579  275624 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:17:33.185585  275624 kubeadm.go:319] 
	I1229 07:17:33.185624  275624 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:17:33.185630  275624 kubeadm.go:319] 
	I1229 07:17:33.185700  275624 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:17:33.185803  275624 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:17:33.185872  275624 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:17:33.185877  275624 kubeadm.go:319] 
	I1229 07:17:33.185948  275624 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:17:33.186039  275624 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:17:33.186046  275624 kubeadm.go:319] 
	I1229 07:17:33.186111  275624 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token aqn0bv.5scivwhms5r84246 \
	I1229 07:17:33.186197  275624 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:17:33.186228  275624 kubeadm.go:319] 	--control-plane 
	I1229 07:17:33.186238  275624 kubeadm.go:319] 
	I1229 07:17:33.186308  275624 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:17:33.186313  275624 kubeadm.go:319] 
	I1229 07:17:33.186383  275624 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token aqn0bv.5scivwhms5r84246 \
	I1229 07:17:33.186484  275624 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
	I1229 07:17:33.186493  275624 cni.go:84] Creating CNI manager for ""
	I1229 07:17:33.186500  275624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:33.187753  275624 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1229 07:17:29.710486  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	W1229 07:17:31.712207  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	I1229 07:17:32.887754  225445 out.go:252]   - Generating certificates and keys ...
	I1229 07:17:32.887869  225445 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:17:32.887962  225445 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:17:32.888061  225445 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:17:32.888149  225445 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:17:32.888257  225445 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:17:32.888347  225445 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:17:32.888432  225445 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:17:32.888501  225445 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:17:32.888578  225445 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:17:32.888659  225445 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:17:32.888725  225445 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:17:32.888798  225445 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:17:32.958744  225445 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:17:33.016769  225445 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:17:33.581042  225445 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:17:33.669265  225445 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:17:33.728757  225445 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:17:33.729351  225445 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:17:33.731463  225445 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1229 07:17:29.777703  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	W1229 07:17:32.277271  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:17:33.732894  225445 out.go:252]   - Booting up control plane ...
	I1229 07:17:33.733006  225445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:17:33.733100  225445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:17:33.733765  225445 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:17:33.747108  225445 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:17:33.747245  225445 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:17:33.754240  225445 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:17:33.754476  225445 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:17:33.754523  225445 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:17:33.856301  225445 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:17:33.856486  225445 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:17:33.188770  275624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:17:33.193313  275624 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:17:33.193331  275624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:17:33.206932  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:17:33.435992  275624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:17:33.436070  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:33.436184  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-067566 minikube.k8s.io/updated_at=2025_12_29T07_17_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=newest-cni-067566 minikube.k8s.io/primary=true
	I1229 07:17:33.518714  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:33.518731  275624 ops.go:34] apiserver oom_adj: -16
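The ops.go line above records the apiserver's OOM adjust score by reading /proc/<pid>/oom_adj on the node (here -16). A rough standalone Go equivalent of that read, run directly on the host instead of through minikube's ssh_runner, might look like the sketch below; the pgrep flags are an assumption and the path mirrors the bash command shown earlier in the log.

// oomadj_sketch.go - illustrative; mirrors "cat /proc/$(pgrep kube-apiserver)/oom_adj".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -x: exact process name match, -n: newest matching PID (assumed flags).
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))

	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // e.g. -16
}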
	I1229 07:17:34.019501  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:34.519465  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1229 07:17:34.277886  268071 pod_ready.go:104] pod "coredns-7d764666f9-55529" is not "Ready", error: <nil>
	I1229 07:17:34.778273  268071 pod_ready.go:94] pod "coredns-7d764666f9-55529" is "Ready"
	I1229 07:17:34.778313  268071 pod_ready.go:86] duration metric: took 35.506613971s for pod "coredns-7d764666f9-55529" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:34.781251  268071 pod_ready.go:83] waiting for pod "etcd-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:34.785984  268071 pod_ready.go:94] pod "etcd-embed-certs-739827" is "Ready"
	I1229 07:17:34.786013  268071 pod_ready.go:86] duration metric: took 4.739165ms for pod "etcd-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:34.788636  268071 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:34.794245  268071 pod_ready.go:94] pod "kube-apiserver-embed-certs-739827" is "Ready"
	I1229 07:17:34.794276  268071 pod_ready.go:86] duration metric: took 5.576443ms for pod "kube-apiserver-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:34.796330  268071 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:34.976516  268071 pod_ready.go:94] pod "kube-controller-manager-embed-certs-739827" is "Ready"
	I1229 07:17:34.976598  268071 pod_ready.go:86] duration metric: took 180.244748ms for pod "kube-controller-manager-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.176111  268071 pod_ready.go:83] waiting for pod "kube-proxy-hdmp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.576282  268071 pod_ready.go:94] pod "kube-proxy-hdmp6" is "Ready"
	I1229 07:17:35.576315  268071 pod_ready.go:86] duration metric: took 400.175041ms for pod "kube-proxy-hdmp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.775300  268071 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:36.176419  268071 pod_ready.go:94] pod "kube-scheduler-embed-certs-739827" is "Ready"
	I1229 07:17:36.176451  268071 pod_ready.go:86] duration metric: took 401.122966ms for pod "kube-scheduler-embed-certs-739827" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:36.176470  268071 pod_ready.go:40] duration metric: took 36.908046678s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:17:36.223034  268071 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:17:36.224836  268071 out.go:179] * Done! kubectl is now configured to use "embed-certs-739827" cluster and "default" namespace by default
	W1229 07:17:34.210428  269280 pod_ready.go:104] pod "coredns-7d764666f9-jwmww" is not "Ready", error: <nil>
	I1229 07:17:35.711850  269280 pod_ready.go:94] pod "coredns-7d764666f9-jwmww" is "Ready"
	I1229 07:17:35.711877  269280 pod_ready.go:86] duration metric: took 31.506957154s for pod "coredns-7d764666f9-jwmww" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.713999  269280 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.717990  269280 pod_ready.go:94] pod "etcd-default-k8s-diff-port-798607" is "Ready"
	I1229 07:17:35.718010  269280 pod_ready.go:86] duration metric: took 3.99127ms for pod "etcd-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.720146  269280 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.724158  269280 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-798607" is "Ready"
	I1229 07:17:35.724188  269280 pod_ready.go:86] duration metric: took 4.006427ms for pod "kube-apiserver-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.726516  269280 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:35.909506  269280 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-798607" is "Ready"
	I1229 07:17:35.909539  269280 pod_ready.go:86] duration metric: took 182.998659ms for pod "kube-controller-manager-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:36.109842  269280 pod_ready.go:83] waiting for pod "kube-proxy-4mnzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:36.510317  269280 pod_ready.go:94] pod "kube-proxy-4mnzc" is "Ready"
	I1229 07:17:36.510396  269280 pod_ready.go:86] duration metric: took 400.524735ms for pod "kube-proxy-4mnzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:36.709447  269280 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:37.110184  269280 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-798607" is "Ready"
	I1229 07:17:37.110246  269280 pod_ready.go:86] duration metric: took 400.773017ms for pod "kube-scheduler-default-k8s-diff-port-798607" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:17:37.110264  269280 pod_ready.go:40] duration metric: took 32.909141753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:17:37.164406  269280 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:17:37.165810  269280 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-798607" cluster and "default" namespace by default
	I1229 07:17:34.357854  225445 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.670649ms
	I1229 07:17:34.360780  225445 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:17:34.360889  225445 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1229 07:17:34.360980  225445 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:17:34.361071  225445 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:17:34.866418  225445 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.521423ms
	I1229 07:17:35.989869  225445 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.628552966s
	I1229 07:17:37.862874  225445 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502033853s
	I1229 07:17:37.881399  225445 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:17:37.894692  225445 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:17:37.904873  225445 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:17:37.905209  225445 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-174577 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:17:37.914355  225445 kubeadm.go:319] [bootstrap-token] Using token: yk2713.yxkrz8ut2386g30u
	I1229 07:17:35.018764  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:35.519460  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:36.019453  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:36.519505  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:37.018998  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:37.518794  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:38.019598  275624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:38.093907  275624 kubeadm.go:1114] duration metric: took 4.657896521s to wait for elevateKubeSystemPrivileges
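The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: minikube retries until the service account controller has created the "default" service account before binding RBAC to it. A hedged client-go sketch of that wait (namespace, interval, and timeout are assumptions, and minikube itself shells out to kubectl over SSH rather than using client-go here):

// defaultsa_sketch.go - illustrative poll for the "default" service account.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Retry every 500ms until the service account controller has created "default".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil
		})
	fmt.Println("default service account present:", err == nil)
}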
	I1229 07:17:38.093961  275624 kubeadm.go:403] duration metric: took 12.073826923s to StartCluster
	I1229 07:17:38.093987  275624 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:38.094077  275624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:38.095856  275624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:38.096133  275624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:17:38.096168  275624 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:38.096246  275624 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:17:38.096361  275624 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-067566"
	I1229 07:17:38.096379  275624 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-067566"
	I1229 07:17:38.096399  275624 addons.go:70] Setting default-storageclass=true in profile "newest-cni-067566"
	I1229 07:17:38.096435  275624 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-067566"
	I1229 07:17:38.096449  275624 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:38.096409  275624 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:38.096877  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:38.097093  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:38.097887  275624 out.go:179] * Verifying Kubernetes components...
	I1229 07:17:38.099067  275624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:38.124899  275624 addons.go:239] Setting addon default-storageclass=true in "newest-cni-067566"
	I1229 07:17:38.124949  275624 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:38.125449  275624 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:38.134730  275624 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:17:38.136387  275624 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:38.136410  275624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:17:38.136486  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:38.164049  275624 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:38.164132  275624 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:17:38.164287  275624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:38.177054  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:38.192086  275624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:38.209120  275624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:17:38.273429  275624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:38.303154  275624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:38.327679  275624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:38.416044  275624 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
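The long sed pipeline run a few lines above rewrites CoreDNS's Corefile before replacing the ConfigMap: it inserts a hosts block ahead of the forward-to-/etc/resolv.conf line so that host.minikube.internal resolves to the host gateway (192.168.94.1 on this network), and adds the log plugin after errors. Assuming an otherwise stock Corefile, the injected stanza would look roughly like:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}

The start.go:987 line above confirms the host record was injected into the ConfigMap.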
	I1229 07:17:38.417853  275624 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:17:38.417919  275624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:17:38.656187  275624 api_server.go:72] duration metric: took 559.980772ms to wait for apiserver process to appear ...
	I1229 07:17:38.656215  275624 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:17:38.656250  275624 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1229 07:17:38.662060  275624 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
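The healthz wait above polls the apiserver's /healthz endpoint until it returns 200 ok. A minimal Go probe in the same spirit is sketched below; the endpoint is the one from this run, and skipping TLS verification is a shortcut for illustration only (minikube's api_server.go uses the cluster CA instead).

// healthz_sketch.go - illustrative apiserver health probe; not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A real check should load the cluster CA rather than skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.94.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}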
	I1229 07:17:38.662931  275624 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:17:38.662945  275624 api_server.go:141] control plane version: v1.35.0
	I1229 07:17:38.662997  275624 api_server.go:131] duration metric: took 6.758384ms to wait for apiserver health ...
	I1229 07:17:38.663011  275624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:17:38.664346  275624 addons.go:530] duration metric: took 568.122463ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:17:38.665765  275624 system_pods.go:59] 8 kube-system pods found
	I1229 07:17:38.665786  275624 system_pods.go:61] "coredns-7d764666f9-8z8sl" [aac77867-0cee-4b2d-b90e-f627a866275e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:17:38.665792  275624 system_pods.go:61] "etcd-newest-cni-067566" [92fcffb5-9a22-4785-b231-2f104990f3d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:17:38.665799  275624 system_pods.go:61] "kindnet-xsh5z" [8b8c4415-a221-4dfa-a159-aafc30482453] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:17:38.665808  275624 system_pods.go:61] "kube-apiserver-newest-cni-067566" [3730b000-9678-4845-9c2a-38f366ff5062] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:17:38.665814  275624 system_pods.go:61] "kube-controller-manager-newest-cni-067566" [cc995cf2-d4dd-4487-b7b9-a867435eb3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:17:38.665823  275624 system_pods.go:61] "kube-proxy-bgwp5" [a08835fd-da4b-4946-8106-ef878654d316] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:17:38.665831  275624 system_pods.go:61] "kube-scheduler-newest-cni-067566" [2e674e29-a841-4b03-bdf8-f08f4c1c66ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:17:38.665841  275624 system_pods.go:61] "storage-provisioner" [93f5f264-8cbc-4101-a7df-485eba3450d2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:17:38.665850  275624 system_pods.go:74] duration metric: took 2.829541ms to wait for pod list to return data ...
	I1229 07:17:38.665856  275624 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:17:38.668094  275624 default_sa.go:45] found service account: "default"
	I1229 07:17:38.668114  275624 default_sa.go:55] duration metric: took 2.25209ms for default service account to be created ...
	I1229 07:17:38.668126  275624 kubeadm.go:587] duration metric: took 571.927703ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:17:38.668148  275624 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:17:38.670572  275624 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1229 07:17:38.670592  275624 node_conditions.go:123] node cpu capacity is 8
	I1229 07:17:38.670605  275624 node_conditions.go:105] duration metric: took 2.452829ms to run NodePressure ...
	I1229 07:17:38.670616  275624 start.go:242] waiting for startup goroutines ...
	I1229 07:17:38.921735  275624 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-067566" context rescaled to 1 replicas
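The kapi.go line above rescales the coredns deployment to a single replica for this one-node cluster. A hedged client-go sketch of that rescale via the scale subresource follows; it approximates the step but is not minikube's implementation, and the kubeconfig path is an assumption.

// rescale_sketch.go - illustrative rescale of kube-system/coredns to 1 replica.
package main

import (
	"context"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	scale := &autoscalingv1.Scale{
		ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
		Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
	}
	// UpdateScale writes spec.replicas through the deployment's scale subresource.
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(
		context.Background(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}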
	I1229 07:17:38.921778  275624 start.go:247] waiting for cluster config update ...
	I1229 07:17:38.921793  275624 start.go:256] writing updated cluster config ...
	I1229 07:17:38.922090  275624 ssh_runner.go:195] Run: rm -f paused
	I1229 07:17:38.988252  275624 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1229 07:17:38.989951  275624 out.go:179] * Done! kubectl is now configured to use "newest-cni-067566" cluster and "default" namespace by default
	I1229 07:17:37.915759  225445 out.go:252]   - Configuring RBAC rules ...
	I1229 07:17:37.915935  225445 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:17:37.920781  225445 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:17:37.926750  225445 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:17:37.929424  225445 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:17:37.932063  225445 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:17:37.935161  225445 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:17:38.271030  225445 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:17:38.685422  225445 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:17:39.270843  225445 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:17:39.271989  225445 kubeadm.go:319] 
	I1229 07:17:39.272091  225445 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:17:39.272103  225445 kubeadm.go:319] 
	I1229 07:17:39.272207  225445 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:17:39.272246  225445 kubeadm.go:319] 
	I1229 07:17:39.272307  225445 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:17:39.272401  225445 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:17:39.272471  225445 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:17:39.272484  225445 kubeadm.go:319] 
	I1229 07:17:39.272558  225445 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:17:39.272563  225445 kubeadm.go:319] 
	I1229 07:17:39.272626  225445 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:17:39.272632  225445 kubeadm.go:319] 
	I1229 07:17:39.272698  225445 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:17:39.272794  225445 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:17:39.272858  225445 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:17:39.272865  225445 kubeadm.go:319] 
	I1229 07:17:39.272986  225445 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:17:39.273103  225445 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:17:39.273119  225445 kubeadm.go:319] 
	I1229 07:17:39.273278  225445 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yk2713.yxkrz8ut2386g30u \
	I1229 07:17:39.273404  225445 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 \
	I1229 07:17:39.273450  225445 kubeadm.go:319] 	--control-plane 
	I1229 07:17:39.273466  225445 kubeadm.go:319] 
	I1229 07:17:39.273576  225445 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:17:39.273585  225445 kubeadm.go:319] 
	I1229 07:17:39.273700  225445 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yk2713.yxkrz8ut2386g30u \
	I1229 07:17:39.273843  225445 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d97bd7e33b97d9ccbd4ec53c25ea13f0ec6aabee3d7ac32cc2829ba2c3b93811 
	I1229 07:17:39.276283  225445 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1229 07:17:39.276469  225445 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:17:39.276509  225445 cni.go:84] Creating CNI manager for ""
	I1229 07:17:39.276527  225445 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:39.278555  225445 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:17:39.279780  225445 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:17:39.284844  225445 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:17:39.284909  225445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:17:39.302299  225445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:17:39.586198  225445 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:17:39.586285  225445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:17:39.586311  225445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-174577 minikube.k8s.io/updated_at=2025_12_29T07_17_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=kubernetes-upgrade-174577 minikube.k8s.io/primary=true
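The clusterrolebinding command above ("kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default") grants cluster-admin to kube-system's default service account. A client-go sketch of the equivalent object is shown below; it is an illustration of what that kubectl command creates, not minikube's own code path.

// rbac_sketch.go - illustrative equivalent of the minikube-rbac clusterrolebinding.
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	// Binds cluster-admin to the kube-system:default service account.
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(
		context.Background(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}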
	I1229 07:17:39.712285  225445 kubeadm.go:1114] duration metric: took 126.084492ms to wait for elevateKubeSystemPrivileges
	I1229 07:17:39.712313  225445 ops.go:34] apiserver oom_adj: -16
	I1229 07:17:39.712362  225445 kubeadm.go:403] duration metric: took 4m25.447013617s to StartCluster
	I1229 07:17:39.712388  225445 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:39.712471  225445 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:39.714963  225445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:39.715278  225445 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:39.715608  225445 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:17:39.715770  225445 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-174577"
	I1229 07:17:39.715775  225445 config.go:182] Loaded profile config "kubernetes-upgrade-174577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:39.715774  225445 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-174577"
	I1229 07:17:39.715789  225445 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-174577"
	W1229 07:17:39.715799  225445 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:17:39.715817  225445 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-174577"
	I1229 07:17:39.715837  225445 host.go:66] Checking if "kubernetes-upgrade-174577" exists ...
	I1229 07:17:39.716170  225445 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-174577 --format={{.State.Status}}
	I1229 07:17:39.716578  225445 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-174577 --format={{.State.Status}}
	I1229 07:17:39.716676  225445 out.go:179] * Verifying Kubernetes components...
	I1229 07:17:39.718039  225445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:39.744389  225445 kapi.go:59] client config for kubernetes-upgrade-174577: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/profiles/kubernetes-upgrade-174577/client.key", CAFile:"/home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:17:39.744734  225445 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-174577"
	W1229 07:17:39.744749  225445 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:17:39.744775  225445 host.go:66] Checking if "kubernetes-upgrade-174577" exists ...
	I1229 07:17:39.745254  225445 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-174577 --format={{.State.Status}}
	I1229 07:17:39.746247  225445 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.49409648Z" level=info msg="Ran pod sandbox 0d2941a25f8c7fab2c86f68122e80fee1c61cc902c905613c564a23e61fbbf37 with infra container: kube-system/kube-proxy-bgwp5/POD" id=3201bf7a-b0d0-4aaf-b58f-7be9608ad815 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.495422014Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=131eefd2-a979-4f6c-b1ee-0f212366b437 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.49542664Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=32fd4aa0-9321-4478-9c17-3a93c51e3387 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.495599006Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=131eefd2-a979-4f6c-b1ee-0f212366b437 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.495700069Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=131eefd2-a979-4f6c-b1ee-0f212366b437 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.496485867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=58dde0aa-d176-43e0-b662-e6f9091b6a6d name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.496859699Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b835915d-1898-4b05-b4bd-167b60b98ed0 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.497313517Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.500175095Z" level=info msg="Creating container: kube-system/kube-proxy-bgwp5/kube-proxy" id=faaf5449-933d-4d7b-bf3f-78f6494bc35c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.500411066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.50486307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.505556555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.538292349Z" level=info msg="Created container 9c651a3885a4c5ceac7d958d59217651cf25a67b2465216de7b0f6312477cb70: kube-system/kube-proxy-bgwp5/kube-proxy" id=faaf5449-933d-4d7b-bf3f-78f6494bc35c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.540659368Z" level=info msg="Starting container: 9c651a3885a4c5ceac7d958d59217651cf25a67b2465216de7b0f6312477cb70" id=a5e66ae4-7e69-447c-9461-83ef8dd1bc9b name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:38 newest-cni-067566 crio[775]: time="2025-12-29T07:17:38.54408947Z" level=info msg="Started container" PID=1593 containerID=9c651a3885a4c5ceac7d958d59217651cf25a67b2465216de7b0f6312477cb70 description=kube-system/kube-proxy-bgwp5/kube-proxy id=a5e66ae4-7e69-447c-9461-83ef8dd1bc9b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d2941a25f8c7fab2c86f68122e80fee1c61cc902c905613c564a23e61fbbf37
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.854020769Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=b835915d-1898-4b05-b4bd-167b60b98ed0 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.85476196Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=11640dcc-1d4c-4bd0-857c-680418a02d9c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.857001852Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=5f117df0-0ea0-4b25-99a5-532a2c9584e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.865256582Z" level=info msg="Creating container: kube-system/kindnet-xsh5z/kindnet-cni" id=751cd730-e944-44a6-a873-2f66f2adccb4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.865477935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.872810891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.873501553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.903910298Z" level=info msg="Created container c378f2cfebc4a581b229757ba2be5a6c8f611993a1b61319b6824e29eba14cd4: kube-system/kindnet-xsh5z/kindnet-cni" id=751cd730-e944-44a6-a873-2f66f2adccb4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.906128557Z" level=info msg="Starting container: c378f2cfebc4a581b229757ba2be5a6c8f611993a1b61319b6824e29eba14cd4" id=0d443c84-15f4-4eba-913f-34d5b4a13f7c name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:39 newest-cni-067566 crio[775]: time="2025-12-29T07:17:39.908420215Z" level=info msg="Started container" PID=1838 containerID=c378f2cfebc4a581b229757ba2be5a6c8f611993a1b61319b6824e29eba14cd4 description=kube-system/kindnet-xsh5z/kindnet-cni id=0d443c84-15f4-4eba-913f-34d5b4a13f7c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b2760155df350d28be94d65006c12c025ed66f8557215d166634d4347bbb32f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c378f2cfebc4a       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   1b2760155df35       kindnet-xsh5z                               kube-system
	9c651a3885a4c       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     1 second ago             Running             kube-proxy                0                   0d2941a25f8c7       kube-proxy-bgwp5                            kube-system
	5f0346a550908       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     12 seconds ago           Running             kube-controller-manager   0                   b79de269ffea3       kube-controller-manager-newest-cni-067566   kube-system
	d480102af703c       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     12 seconds ago           Running             kube-apiserver            0                   b6f9fafab1227       kube-apiserver-newest-cni-067566            kube-system
	fae1781df89d1       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     12 seconds ago           Running             etcd                      0                   259ab1e960de2       etcd-newest-cni-067566                      kube-system
	a594e2d55b4a4       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     12 seconds ago           Running             kube-scheduler            0                   b3cbdd60a3ccf       kube-scheduler-newest-cni-067566            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-067566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-067566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=newest-cni-067566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_17_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:17:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-067566
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:32 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:32 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:32 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:17:32 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-067566
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                914357fb-65ed-487f-9aef-7a75495f3546
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-067566                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-xsh5z                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-067566             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-067566    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-bgwp5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-067566             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-067566 event: Registered Node newest-cni-067566 in Controller
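The describe output above shows the node still NotReady: the NodeReady condition is False with reason KubeletNotReady because no CNI configuration exists yet in /etc/cni/net.d (the kindnet pod is only just starting). A short client-go sketch for inspecting that condition is below; the node name is the one from this run and the kubeconfig path is an assumption.

// nodeready_sketch.go - illustrative check of the NodeReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-067566", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Prints False/KubeletNotReady until a CNI config appears in /etc/cni/net.d.
			fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
		}
	}
}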
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [fae1781df89d1c4738dda458aabce49436f013b46fdbb84f60e3a93c6c2ee54f] <==
	{"level":"info","ts":"2025-12-29T07:17:28.550168Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:17:29.241014Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:17:29.241064Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:17:29.241139Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-29T07:17:29.241168Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:29.241188Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:29.241792Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:29.241816Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:29.241831Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:29.241841Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:29.242518Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:newest-cni-067566 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:17:29.242575Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:29.242631Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:29.242670Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:29.242902Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:29.243009Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:29.243250Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:29.243504Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:29.243569Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:29.243604Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:17:29.243754Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:17:29.244097Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:29.244095Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:29.246855Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:17:29.246924Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 07:17:40 up  1:00,  0 user,  load average: 3.95, 3.00, 2.13
	Linux newest-cni-067566 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c378f2cfebc4a581b229757ba2be5a6c8f611993a1b61319b6824e29eba14cd4] <==
	I1229 07:17:40.033086       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:17:40.128035       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1229 07:17:40.128210       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:17:40.128263       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:17:40.128291       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:17:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:17:40.328768       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:17:40.328802       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:17:40.328818       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:17:40.329365       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d480102af703c0f09e6b4f51a24b18f074e3ea40fdf5156026f445927b937ab8] <==
	I1229 07:17:30.209586       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:17:30.209598       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1229 07:17:30.210106       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1229 07:17:30.211838       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:17:30.213031       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:17:30.213148       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:17:30.217753       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:17:30.413153       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:17:31.116154       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:17:31.120363       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:17:31.120378       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:31.552913       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:17:31.586731       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:17:31.718940       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:17:31.725597       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1229 07:17:31.726786       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:17:31.730634       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:32.127753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:17:32.576491       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:17:32.586047       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:17:32.593212       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:17:37.729335       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:17:37.882323       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:17:37.886465       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:17:38.131329       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5f0346a550908e978c45b5ea66d903dc1c69a215a601140bc70693645ffbd27a] <==
	I1229 07:17:36.940832       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.948612       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.940916       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.940983       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.940654       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.949769       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.950759       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.951768       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.952047       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:17:36.953985       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954061       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954164       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954296       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954475       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954615       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954643       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.954820       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:17:36.954835       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:36.954842       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.955157       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:36.962161       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-067566" podCIDRs=["10.42.0.0/24"]
	I1229 07:17:37.038542       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:37.038693       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:37.038744       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:37.045710       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9c651a3885a4c5ceac7d958d59217651cf25a67b2465216de7b0f6312477cb70] <==
	I1229 07:17:38.604246       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:17:38.693485       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:38.794001       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:38.794036       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1229 07:17:38.794531       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:17:38.820281       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:17:38.820346       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:17:38.826941       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:17:38.827337       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:17:38.827358       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:38.828767       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:17:38.828833       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:17:38.828929       1 config.go:200] "Starting service config controller"
	I1229 07:17:38.828992       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:17:38.829465       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:17:38.829254       1 config.go:309] "Starting node config controller"
	I1229 07:17:38.829507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:17:38.829517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:17:38.829600       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:17:38.929848       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:17:38.929898       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:17:38.929932       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a594e2d55b4a460b4f2aff79f0ea6cb9ab1b13bac8cba8b8d9d89409410306e5] <==
	E1229 07:17:30.170863       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:17:30.170928       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:17:30.170979       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:17:30.171046       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:17:30.171046       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:17:30.171059       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:17:30.171080       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:17:30.171162       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:17:30.171115       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:17:30.171202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:17:30.171202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:17:30.171273       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:17:30.171299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:17:31.001368       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:17:31.043606       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:17:31.075791       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:17:31.120130       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:17:31.154344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:17:31.162338       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:17:31.195993       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:17:31.232868       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:17:31.256213       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:17:31.339570       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:17:31.341005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1229 07:17:33.264151       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:33 newest-cni-067566 kubelet[1302]: E1229 07:17:33.430233    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-067566" containerName="etcd"
	Dec 29 07:17:33 newest-cni-067566 kubelet[1302]: I1229 07:17:33.446523    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-067566" podStartSLOduration=1.4465032930000001 podStartE2EDuration="1.446503293s" podCreationTimestamp="2025-12-29 07:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:17:33.43738764 +0000 UTC m=+1.115175760" watchObservedRunningTime="2025-12-29 07:17:33.446503293 +0000 UTC m=+1.124291417"
	Dec 29 07:17:33 newest-cni-067566 kubelet[1302]: I1229 07:17:33.446772    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-067566" podStartSLOduration=1.446760249 podStartE2EDuration="1.446760249s" podCreationTimestamp="2025-12-29 07:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:17:33.446706763 +0000 UTC m=+1.124494881" watchObservedRunningTime="2025-12-29 07:17:33.446760249 +0000 UTC m=+1.124548371"
	Dec 29 07:17:33 newest-cni-067566 kubelet[1302]: I1229 07:17:33.469493    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-067566" podStartSLOduration=1.469475967 podStartE2EDuration="1.469475967s" podCreationTimestamp="2025-12-29 07:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:17:33.469258769 +0000 UTC m=+1.147046891" watchObservedRunningTime="2025-12-29 07:17:33.469475967 +0000 UTC m=+1.147264086"
	Dec 29 07:17:33 newest-cni-067566 kubelet[1302]: I1229 07:17:33.469666    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-067566" podStartSLOduration=1.46965627 podStartE2EDuration="1.46965627s" podCreationTimestamp="2025-12-29 07:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:17:33.460524125 +0000 UTC m=+1.138312247" watchObservedRunningTime="2025-12-29 07:17:33.46965627 +0000 UTC m=+1.147444392"
	Dec 29 07:17:34 newest-cni-067566 kubelet[1302]: E1229 07:17:34.422695    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-067566" containerName="kube-controller-manager"
	Dec 29 07:17:34 newest-cni-067566 kubelet[1302]: E1229 07:17:34.422785    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-067566" containerName="etcd"
	Dec 29 07:17:34 newest-cni-067566 kubelet[1302]: E1229 07:17:34.422980    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-067566" containerName="kube-apiserver"
	Dec 29 07:17:34 newest-cni-067566 kubelet[1302]: E1229 07:17:34.423162    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:35 newest-cni-067566 kubelet[1302]: E1229 07:17:35.425313    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:35 newest-cni-067566 kubelet[1302]: E1229 07:17:35.425418    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-067566" containerName="kube-apiserver"
	Dec 29 07:17:36 newest-cni-067566 kubelet[1302]: E1229 07:17:36.427347    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:36 newest-cni-067566 kubelet[1302]: E1229 07:17:36.652939    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-067566" containerName="kube-controller-manager"
	Dec 29 07:17:37 newest-cni-067566 kubelet[1302]: I1229 07:17:37.034452    1302 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 29 07:17:37 newest-cni-067566 kubelet[1302]: I1229 07:17:37.035410    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233652    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-cni-cfg\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233696    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-lib-modules\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233722    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08835fd-da4b-4946-8106-ef878654d316-xtables-lock\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233750    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-xtables-lock\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233772    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4dq\" (UniqueName: \"kubernetes.io/projected/8b8c4415-a221-4dfa-a159-aafc30482453-kube-api-access-xh4dq\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233810    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a08835fd-da4b-4946-8106-ef878654d316-kube-proxy\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233900    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08835fd-da4b-4946-8106-ef878654d316-lib-modules\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:38 newest-cni-067566 kubelet[1302]: I1229 07:17:38.233931    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbkj\" (UniqueName: \"kubernetes.io/projected/a08835fd-da4b-4946-8106-ef878654d316-kube-api-access-dcbkj\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:39 newest-cni-067566 kubelet[1302]: I1229 07:17:39.458135    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-bgwp5" podStartSLOduration=1.458111624 podStartE2EDuration="1.458111624s" podCreationTimestamp="2025-12-29 07:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:17:39.457890958 +0000 UTC m=+7.135679081" watchObservedRunningTime="2025-12-29 07:17:39.458111624 +0000 UTC m=+7.135899741"
	Dec 29 07:17:40 newest-cni-067566 kubelet[1302]: I1229 07:17:40.459164    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-xsh5z" podStartSLOduration=1.099566013 podStartE2EDuration="2.459143831s" podCreationTimestamp="2025-12-29 07:17:38 +0000 UTC" firstStartedPulling="2025-12-29 07:17:38.496403377 +0000 UTC m=+6.174191491" lastFinishedPulling="2025-12-29 07:17:39.855981195 +0000 UTC m=+7.533769309" observedRunningTime="2025-12-29 07:17:40.458995366 +0000 UTC m=+8.136783595" watchObservedRunningTime="2025-12-29 07:17:40.459143831 +0000 UTC m=+8.136931953"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-067566 -n newest-cni-067566
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-067566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-8z8sl storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner: exit status 1 (58.941577ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-8z8sl" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.38s)
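A note on the post-mortem step above: helpers_test.go lists non-running pods with -A --field-selector=status.phase!=Running across all namespaces, then runs `kubectl describe pod` on the returned names without a namespace flag, so kube-system pods such as coredns-7d764666f9-8z8sl and storage-provisioner are looked up in the default namespace and come back NotFound. The sketch below shows one tolerant way to drive that describe step; describeIfPresent is a hypothetical helper written for illustration under those assumptions, not the implementation in helpers_test.go.

	// describeIfPresent re-checks each pod with `kubectl get pod --ignore-not-found`
	// in an explicit namespace and only describes pods that still exist, so a
	// NotFound pod does not turn the post-mortem step into a non-zero exit.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func describeIfPresent(context, namespace string, pods []string) error {
		for _, pod := range pods {
			out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
				"get", "pod", pod, "--ignore-not-found", "-o", "name").Output()
			if err != nil {
				return fmt.Errorf("checking pod %s: %w", pod, err)
			}
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("pod %s not found in %s, skipping describe\n", pod, namespace)
				continue
			}
			desc, err := exec.Command("kubectl", "--context", context, "-n", namespace,
				"describe", "pod", pod).CombinedOutput()
			if err != nil {
				return fmt.Errorf("describing pod %s: %w", pod, err)
			}
			fmt.Println(string(desc))
		}
		return nil
	}

	func main() {
		// Pod names taken from the log above; the kube-system namespace is an assumption.
		if err := describeIfPresent("newest-cni-067566", "kube-system",
			[]string{"coredns-7d764666f9-8z8sl", "storage-provisioner"}); err != nil {
			fmt.Println(err)
		}
	}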

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-739827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-739827 --alsologtostderr -v=1: exit status 80 (1.815200798s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-739827 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:17:48.018771  283377 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:48.018913  283377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.018925  283377 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:48.018933  283377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.019288  283377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:48.019618  283377 out.go:368] Setting JSON to false
	I1229 07:17:48.019645  283377 mustload.go:66] Loading cluster: embed-certs-739827
	I1229 07:17:48.020134  283377 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.020735  283377 cli_runner.go:164] Run: docker container inspect embed-certs-739827 --format={{.State.Status}}
	I1229 07:17:48.040595  283377 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:17:48.040933  283377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:48.102173  283377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:17:48.091411564 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:48.104264  283377 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-739827 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:17:48.110663  283377 out.go:179] * Pausing node embed-certs-739827 ... 
	I1229 07:17:48.112584  283377 host.go:66] Checking if "embed-certs-739827" exists ...
	I1229 07:17:48.112851  283377 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:48.112898  283377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-739827
	I1229 07:17:48.132417  283377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/embed-certs-739827/id_rsa Username:docker}
	I1229 07:17:48.229923  283377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:48.241776  283377 pause.go:52] kubelet running: true
	I1229 07:17:48.241840  283377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:48.419133  283377 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:48.419210  283377 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:48.495898  283377 cri.go:96] found id: "704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16"
	I1229 07:17:48.495916  283377 cri.go:96] found id: "18fb52cea0d3400acddabe689116097412e230e6ba2b4477769aa7d3e66a805d"
	I1229 07:17:48.495921  283377 cri.go:96] found id: "f2078890d2c20f1615244415e59607cf2ee2465b8956242073cb8e5b80673001"
	I1229 07:17:48.495926  283377 cri.go:96] found id: "8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e"
	I1229 07:17:48.495929  283377 cri.go:96] found id: "1af994b88cd9a20f9f21db7006c416782dc168261061cd2ae2e686e54a934563"
	I1229 07:17:48.495934  283377 cri.go:96] found id: "64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555"
	I1229 07:17:48.495938  283377 cri.go:96] found id: "9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a"
	I1229 07:17:48.495942  283377 cri.go:96] found id: "f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc"
	I1229 07:17:48.495946  283377 cri.go:96] found id: "0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c"
	I1229 07:17:48.495956  283377 cri.go:96] found id: "eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2"
	I1229 07:17:48.495960  283377 cri.go:96] found id: "aad9ce3cf88d2672fedb660dd957d3e79fcfbbdeb5e444da96051de7009cea2d"
	I1229 07:17:48.495965  283377 cri.go:96] found id: ""
	I1229 07:17:48.496006  283377 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:48.508622  283377 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:48Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:17:48.795069  283377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:48.809715  283377 pause.go:52] kubelet running: false
	I1229 07:17:48.809769  283377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:49.014902  283377 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:49.014997  283377 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:49.103012  283377 cri.go:96] found id: "704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16"
	I1229 07:17:49.103028  283377 cri.go:96] found id: "18fb52cea0d3400acddabe689116097412e230e6ba2b4477769aa7d3e66a805d"
	I1229 07:17:49.103032  283377 cri.go:96] found id: "f2078890d2c20f1615244415e59607cf2ee2465b8956242073cb8e5b80673001"
	I1229 07:17:49.103035  283377 cri.go:96] found id: "8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e"
	I1229 07:17:49.103038  283377 cri.go:96] found id: "1af994b88cd9a20f9f21db7006c416782dc168261061cd2ae2e686e54a934563"
	I1229 07:17:49.103041  283377 cri.go:96] found id: "64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555"
	I1229 07:17:49.103044  283377 cri.go:96] found id: "9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a"
	I1229 07:17:49.103047  283377 cri.go:96] found id: "f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc"
	I1229 07:17:49.103055  283377 cri.go:96] found id: "0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c"
	I1229 07:17:49.103060  283377 cri.go:96] found id: "eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2"
	I1229 07:17:49.103063  283377 cri.go:96] found id: "aad9ce3cf88d2672fedb660dd957d3e79fcfbbdeb5e444da96051de7009cea2d"
	I1229 07:17:49.103066  283377 cri.go:96] found id: ""
	I1229 07:17:49.103093  283377 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:49.441364  283377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:49.457110  283377 pause.go:52] kubelet running: false
	I1229 07:17:49.457170  283377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:49.655118  283377 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:49.655197  283377 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:49.743512  283377 cri.go:96] found id: "704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16"
	I1229 07:17:49.743540  283377 cri.go:96] found id: "18fb52cea0d3400acddabe689116097412e230e6ba2b4477769aa7d3e66a805d"
	I1229 07:17:49.743546  283377 cri.go:96] found id: "f2078890d2c20f1615244415e59607cf2ee2465b8956242073cb8e5b80673001"
	I1229 07:17:49.743552  283377 cri.go:96] found id: "8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e"
	I1229 07:17:49.743556  283377 cri.go:96] found id: "1af994b88cd9a20f9f21db7006c416782dc168261061cd2ae2e686e54a934563"
	I1229 07:17:49.743562  283377 cri.go:96] found id: "64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555"
	I1229 07:17:49.743566  283377 cri.go:96] found id: "9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a"
	I1229 07:17:49.743572  283377 cri.go:96] found id: "f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc"
	I1229 07:17:49.743577  283377 cri.go:96] found id: "0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c"
	I1229 07:17:49.743585  283377 cri.go:96] found id: "eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2"
	I1229 07:17:49.743590  283377 cri.go:96] found id: "aad9ce3cf88d2672fedb660dd957d3e79fcfbbdeb5e444da96051de7009cea2d"
	I1229 07:17:49.743594  283377 cri.go:96] found id: ""
	I1229 07:17:49.743647  283377 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:49.757400  283377 out.go:203] 
	W1229 07:17:49.758556  283377 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:17:49.758572  283377 out.go:285] * 
	* 
	W1229 07:17:49.760774  283377 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:17:49.763402  283377 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-739827 --alsologtostderr -v=1 failed: exit status 80
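The pause flow visible in the stderr above is: check that the kubelet is active, disable it, list kube-system/kubernetes-dashboard/istio-operator containers through crictl, then shell out to `sudo runc list -f json`; that last command fails with "open /run/runc: no such file or directory", is retried, and finally surfaces as GUEST_PAUSE. Below is a minimal fail-fast sketch of that listing step, assuming a hypothetical listRuncContainers helper that probes the state root before calling runc; it is not minikube's pause implementation, and the underlying issue may simply be that this crio node keeps its runtime state somewhere other than /run/runc.

	// listRuncContainers fails fast when the runc state root is missing, which is
	// the situation in the log above, instead of retrying a command that cannot
	// succeed until the directory exists.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// runcState holds the fields of interest from `runc list -f json`.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listRuncContainers(root string) ([]runcState, error) {
		if _, err := os.Stat(root); os.IsNotExist(err) {
			return nil, fmt.Errorf("runc state root %s does not exist; the node may use a different OCI runtime or runtime root", root)
		}
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, fmt.Errorf("parsing runc list output: %w", err)
		}
		return states, nil
	}

	func main() {
		// /run/runc is the default state root that the failing command above implies.
		states, err := listRuncContainers("/run/runc")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, s := range states {
			fmt.Printf("%s\t%s\n", s.ID, s.Status)
		}
	}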
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-739827
helpers_test.go:244: (dbg) docker inspect embed-certs-739827:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510",
	        "Created": "2025-12-29T07:15:42.247731806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268404,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:16:49.272550429Z",
	            "FinishedAt": "2025-12-29T07:16:48.366627723Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/hosts",
	        "LogPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510-json.log",
	        "Name": "/embed-certs-739827",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-739827:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-739827",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510",
	                "LowerDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-739827",
	                "Source": "/var/lib/docker/volumes/embed-certs-739827/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-739827",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-739827",
	                "name.minikube.sigs.k8s.io": "embed-certs-739827",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ecbd40f6344d61174b0d6cdd84af861ca9e73ac711be404978ef32e426a12d05",
	            "SandboxKey": "/var/run/docker/netns/ecbd40f6344d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-739827": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b087e00cc8440c1f4006081344d5fbc0a2e6dd2a74b7013ef26beec3a624ea25",
	                    "EndpointID": "69fff274e635ba82d9c7956bd856b1c2d1033928217e53b2838d6384759c2472",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:91:39:5f:2f:e6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-739827",
	                        "5d317fcd0cf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827: exit status 2 (401.336635ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-739827 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-739827 logs -n 25: (1.873110799s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ stop    │ -p newest-cni-067566 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-174577                                                                                                                                                                                                                  │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ embed-certs-739827 image list --format=json                                                                                                                                                                                                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p embed-certs-739827 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ image   │ default-k8s-diff-port-798607 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-619064                  │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-798607 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:48.840797  283828 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:48.841045  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841055  283828 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:48.841062  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841280  283828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:48.841791  283828 out.go:368] Setting JSON to false
	I1229 07:17:48.842910  283828 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3621,"bootTime":1766989048,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:48.842979  283828 start.go:143] virtualization: kvm guest
	I1229 07:17:48.844736  283828 out.go:179] * [auto-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:48.845826  283828 notify.go:221] Checking for updates...
	I1229 07:17:48.845856  283828 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:48.846918  283828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:48.848029  283828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:48.849267  283828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:48.850430  283828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:48.851389  283828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:48.852763  283828 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852851  283828 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852967  283828 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.853079  283828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:17:48.886042  283828 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:17:48.886270  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:48.958368  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:17:48.945550839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:48.958528  283828 docker.go:319] overlay module found
	I1229 07:17:48.960723  283828 out.go:179] * Using the docker driver based on user configuration
	I1229 07:17:48.962115  283828 start.go:309] selected driver: docker
	I1229 07:17:48.962139  283828 start.go:928] validating driver "docker" against <nil>
	I1229 07:17:48.962156  283828 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:17:48.962959  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:49.036845  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:17:49.023955716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:49.037106  283828 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:17:49.037420  283828 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:17:49.039270  283828 out.go:179] * Using Docker driver with root privileges
	I1229 07:17:49.040420  283828 cni.go:84] Creating CNI manager for ""
	I1229 07:17:49.040485  283828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:49.040495  283828 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:17:49.040561  283828 start.go:353] cluster config:
	{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1229 07:17:49.041849  283828 out.go:179] * Starting "auto-619064" primary control-plane node in "auto-619064" cluster
	I1229 07:17:49.043144  283828 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:17:49.044792  283828 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:17:49.045939  283828 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:49.045971  283828 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:17:49.045979  283828 cache.go:65] Caching tarball of preloaded images
	I1229 07:17:49.046047  283828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:17:49.046077  283828 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:17:49.046088  283828 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:17:49.046215  283828 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json ...
	I1229 07:17:49.046253  283828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json: {Name:mk9baeefab07482d719bbe5fc1c8ed346993a174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:49.074420  283828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:17:49.074442  283828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:17:49.074464  283828 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:17:49.074504  283828 start.go:360] acquireMachinesLock for auto-619064: {Name:mk846f65ba6df3e8e6a1f86164308301a22a7b28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:17:49.074631  283828 start.go:364] duration metric: took 103.352µs to acquireMachinesLock for "auto-619064"
	I1229 07:17:49.074660  283828 start.go:93] Provisioning new machine with config: &{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:49.074755  283828 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:17:44.397135  281965 out.go:252] * Restarting existing docker container for "newest-cni-067566" ...
	I1229 07:17:44.397210  281965 cli_runner.go:164] Run: docker start newest-cni-067566
	I1229 07:17:44.690272  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:44.717474  281965 kic.go:430] container "newest-cni-067566" state is running.
	I1229 07:17:44.717921  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:44.743901  281965 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json ...
	I1229 07:17:44.744189  281965 machine.go:94] provisionDockerMachine start ...
	I1229 07:17:44.744274  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:44.768576  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:44.768891  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:44.768924  281965 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:17:44.770493  281965 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57186->127.0.0.1:33098: read: connection reset by peer
	I1229 07:17:47.914089  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:47.914114  281965 ubuntu.go:182] provisioning hostname "newest-cni-067566"
	I1229 07:17:47.914174  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:47.934478  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:47.934810  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:47.934832  281965 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-067566 && echo "newest-cni-067566" | sudo tee /etc/hostname
	I1229 07:17:48.090581  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:48.090653  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.111676  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.111979  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.112010  281965 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-067566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-067566/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-067566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:17:48.252535  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:17:48.252565  281965 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:17:48.252607  281965 ubuntu.go:190] setting up certificates
	I1229 07:17:48.252633  281965 provision.go:84] configureAuth start
	I1229 07:17:48.252749  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:48.273854  281965 provision.go:143] copyHostCerts
	I1229 07:17:48.273916  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:17:48.273935  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:17:48.274004  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:17:48.274141  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:17:48.274153  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:17:48.274197  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:17:48.274307  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:17:48.274318  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:17:48.274356  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:17:48.274453  281965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.newest-cni-067566 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-067566]
	I1229 07:17:48.299081  281965 provision.go:177] copyRemoteCerts
	I1229 07:17:48.299165  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:17:48.299241  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.317986  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.420562  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:17:48.439388  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:17:48.458327  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:17:48.476027  281965 provision.go:87] duration metric: took 223.366415ms to configureAuth
	I1229 07:17:48.476058  281965 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:17:48.476241  281965 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.476348  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.497081  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.497420  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.497457  281965 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:17:48.800973  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:17:48.801000  281965 machine.go:97] duration metric: took 4.056798061s to provisionDockerMachine
	I1229 07:17:48.801014  281965 start.go:293] postStartSetup for "newest-cni-067566" (driver="docker")
	I1229 07:17:48.801028  281965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:17:48.801107  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:17:48.801169  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.822694  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.929634  281965 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:17:48.935128  281965 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:17:48.935160  281965 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:17:48.935174  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:17:48.935265  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:17:48.935372  281965 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:17:48.935496  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:17:48.945366  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:48.966328  281965 start.go:296] duration metric: took 165.300332ms for postStartSetup
	I1229 07:17:48.966399  281965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:17:48.966445  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.995761  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.102276  281965 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:17:49.107575  281965 fix.go:56] duration metric: took 4.734013486s for fixHost
	I1229 07:17:49.107603  281965 start.go:83] releasing machines lock for "newest-cni-067566", held for 4.734065769s
	I1229 07:17:49.107664  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:49.128559  281965 ssh_runner.go:195] Run: cat /version.json
	I1229 07:17:49.128616  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.128663  281965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:17:49.128754  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.150708  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.150993  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Dec 29 07:17:20 embed-certs-739827 crio[571]: time="2025-12-29T07:17:20.668192837Z" level=info msg="Started container" PID=1800 containerID=feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper id=c76e817c-a7b7-49aa-b967-66b32d6cad84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48790b8ecc0a5f5083cca4db87f99e10893da7de9f0c5891e52d51bf9810c4b1
	Dec 29 07:17:20 embed-certs-739827 crio[571]: time="2025-12-29T07:17:20.720343007Z" level=info msg="Removing container: 36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4" id=66e3f517-5ab9-4063-afb9-19b30c25994a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:20 embed-certs-739827 crio[571]: time="2025-12-29T07:17:20.729528314Z" level=info msg="Removed container 36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=66e3f517-5ab9-4063-afb9-19b30c25994a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.742618134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a714508e-f3f6-4f48-a28c-0338ec7f40c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.743767615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8a33acfd-2a02-4aaf-96a1-dee7a41abb1f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.744989757Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=043d3105-aff8-46ea-b260-8e8f3732b568 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.745172518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.749851041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.750548347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ddb39b9554d3191988643cbc3b1e4d9ba5c97ac3ee4f4d875a52203655228bab/merged/etc/passwd: no such file or directory"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.750587511Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ddb39b9554d3191988643cbc3b1e4d9ba5c97ac3ee4f4d875a52203655228bab/merged/etc/group: no such file or directory"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.750905642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.782498551Z" level=info msg="Created container 704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16: kube-system/storage-provisioner/storage-provisioner" id=043d3105-aff8-46ea-b260-8e8f3732b568 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.783081909Z" level=info msg="Starting container: 704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16" id=0983650b-504b-4841-a345-9c9a61e95892 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.785339989Z" level=info msg="Started container" PID=1814 containerID=704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16 description=kube-system/storage-provisioner/storage-provisioner id=0983650b-504b-4841-a345-9c9a61e95892 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02ab04d8adcb09c16a16ca241d983d78eb3e2571ecbeebcd952606e1b3068a81
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.622547274Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4d556068-55e1-4e6d-b496-1056346b5ce5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.623713938Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2e4860c1-04c2-49d3-bb79-32fc48a90ea5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.624855576Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=3e24f1fc-bb8e-4c67-a4d6-c24369cd41f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.625051822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.630556302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.631037235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.658163102Z" level=info msg="Created container eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=3e24f1fc-bb8e-4c67-a4d6-c24369cd41f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.658786617Z" level=info msg="Starting container: eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2" id=02b12608-0fc5-497e-8ddd-dfc8eea80a1d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.660411174Z" level=info msg="Started container" PID=1853 containerID=eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper id=02b12608-0fc5-497e-8ddd-dfc8eea80a1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=48790b8ecc0a5f5083cca4db87f99e10893da7de9f0c5891e52d51bf9810c4b1
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.797906705Z" level=info msg="Removing container: feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408" id=15c95264-98c7-4c0d-b4e3-51a27f5de49d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.80658633Z" level=info msg="Removed container feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=15c95264-98c7-4c0d-b4e3-51a27f5de49d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	eb918f980511a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   48790b8ecc0a5       dashboard-metrics-scraper-867fb5f87b-sglpv   kubernetes-dashboard
	704468951f6d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   02ab04d8adcb0       storage-provisioner                          kube-system
	aad9ce3cf88d2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   0e546d9258df3       kubernetes-dashboard-b84665fb8-rdq2m         kubernetes-dashboard
	18fb52cea0d34       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   c78ca58b08718       coredns-7d764666f9-55529                     kube-system
	83f35284bd1fb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   967d7bdf739a6       busybox                                      default
	f2078890d2c20       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   d7dcd4a1d5298       kindnet-l6mxr                                kube-system
	8832475ac0d10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   02ab04d8adcb0       storage-provisioner                          kube-system
	1af994b88cd9a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           53 seconds ago      Running             kube-proxy                  0                   dd92199c0d384       kube-proxy-hdmp6                             kube-system
	64d38c25f85b2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           55 seconds ago      Running             kube-apiserver              0                   28daa174d0ef2       kube-apiserver-embed-certs-739827            kube-system
	9212464b12efa       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           55 seconds ago      Running             kube-scheduler              0                   3e97ba4238d86       kube-scheduler-embed-certs-739827            kube-system
	f8f720f7da228       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           55 seconds ago      Running             kube-controller-manager     0                   bc01cdda86309       kube-controller-manager-embed-certs-739827   kube-system
	0b939e4faa562       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           55 seconds ago      Running             etcd                        0                   3f0414e81a726       etcd-embed-certs-739827                      kube-system
	
	
	==> coredns [18fb52cea0d3400acddabe689116097412e230e6ba2b4477769aa7d3e66a805d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33939 - 33460 "HINFO IN 4050320932017595841.7466373877937374045. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03362485s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-739827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-739827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=embed-certs-739827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_15_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-739827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-739827
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                ab46c7d0-f92f-48dd-a29d-7cfb62a7d0f3
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-55529                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-739827                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-l6mxr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-739827             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-739827    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-hdmp6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-739827             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-sglpv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rdq2m          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node embed-certs-739827 event: Registered Node embed-certs-739827 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node embed-certs-739827 event: Registered Node embed-certs-739827 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c] <==
	{"level":"info","ts":"2025-12-29T07:16:56.670495Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:56.670552Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:56.670631Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:56.670649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:56.670669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.671364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.671385Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:56.671406Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.671419Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.672480Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-739827 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:16:56.672500Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:56.672526Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:56.672709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:56.672748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:56.673807Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:56.673836Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:56.677097Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:16:56.677400Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-29T07:17:18.324534Z","caller":"traceutil/trace.go:172","msg":"trace[2003940404] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"175.322301ms","start":"2025-12-29T07:17:18.149189Z","end":"2025-12-29T07:17:18.324511Z","steps":["trace[2003940404] 'process raft request'  (duration: 175.180237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-29T07:17:18.489317Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.440144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-739827\" limit:1 ","response":"range_response_count:1 size:5960"}
	{"level":"info","ts":"2025-12-29T07:17:18.489453Z","caller":"traceutil/trace.go:172","msg":"trace[186191666] range","detail":"{range_begin:/registry/minions/embed-certs-739827; range_end:; response_count:1; response_revision:623; }","duration":"162.563213ms","start":"2025-12-29T07:17:18.326842Z","end":"2025-12-29T07:17:18.489405Z","steps":["trace[186191666] 'agreement among raft nodes before linearized reading'  (duration: 36.516229ms)","trace[186191666] 'range keys from in-memory index tree'  (duration: 125.708212ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-29T07:17:18.489887Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.979556ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873791002224553064 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-739827\" mod_revision:609 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-29T07:17:18.489971Z","caller":"traceutil/trace.go:172","msg":"trace[1155581574] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"307.030326ms","start":"2025-12-29T07:17:18.182926Z","end":"2025-12-29T07:17:18.489956Z","steps":["trace[1155581574] 'process raft request'  (duration: 180.457902ms)","trace[1155581574] 'compare'  (duration: 125.777811ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-29T07:17:18.490035Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-29T07:17:18.182906Z","time spent":"307.093891ms","remote":"127.0.0.1:42478","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-739827\" mod_revision:609 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" > >"}
	{"level":"info","ts":"2025-12-29T07:17:19.016447Z","caller":"traceutil/trace.go:172","msg":"trace[1192439289] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"129.494233ms","start":"2025-12-29T07:17:18.886932Z","end":"2025-12-29T07:17:19.016427Z","steps":["trace[1192439289] 'process raft request'  (duration: 128.110859ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:17:51 up  1:00,  0 user,  load average: 4.25, 3.10, 2.18
	Linux embed-certs-739827 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f2078890d2c20f1615244415e59607cf2ee2465b8956242073cb8e5b80673001] <==
	I1229 07:16:58.290327       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:16:58.290622       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1229 07:16:58.290824       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:16:58.290857       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:16:58.290885       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:16:58.491207       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:16:58.491270       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:16:58.491287       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:16:58.491543       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:16:58.791432       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:16:58.791458       1 metrics.go:72] Registering metrics
	I1229 07:16:58.791510       1 controller.go:711] "Syncing nftables rules"
	I1229 07:17:08.491341       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:08.491404       1 main.go:301] handling current node
	I1229 07:17:18.491829       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:18.491884       1 main.go:301] handling current node
	I1229 07:17:28.498305       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:28.498346       1 main.go:301] handling current node
	I1229 07:17:38.493357       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:38.493426       1 main.go:301] handling current node
	I1229 07:17:48.491366       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:48.491724       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555] <==
	I1229 07:16:57.661447       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:16:57.664801       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:57.662159       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:16:57.664568       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:16:57.664598       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:57.665005       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:16:57.664863       1 policy_source.go:248] refreshing policies
	I1229 07:16:57.665015       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:16:57.665023       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:16:57.665030       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:16:57.662058       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:16:57.671045       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:16:57.709547       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:57.714571       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:16:57.788288       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:16:57.994366       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:16:58.051747       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:16:58.075970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:16:58.089140       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:16:58.132856       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.135.194"}
	I1229 07:16:58.142589       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.230.131"}
	I1229 07:16:58.564099       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:01.289530       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:17:01.337693       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:01.388084       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc] <==
	I1229 07:17:00.794979       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.794987       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795103       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795140       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795154       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.803054       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:17:00.803128       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:17:00.803159       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:00.803203       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795200       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.793832       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:17:00.802316       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804611       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804695       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804714       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804733       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804737       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804612       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804624       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.807285       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:00.812235       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.904372       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.904399       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:00.904405       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:00.907447       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1af994b88cd9a20f9f21db7006c416782dc168261061cd2ae2e686e54a934563] <==
	I1229 07:16:58.080789       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:16:58.143579       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:58.244380       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:58.244415       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1229 07:16:58.244548       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:16:58.264117       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:16:58.264180       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:16:58.269542       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:16:58.270296       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:16:58.270342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:58.272417       1 config.go:200] "Starting service config controller"
	I1229 07:16:58.272440       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:16:58.272465       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:16:58.272471       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:16:58.272488       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:16:58.272493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:16:58.272710       1 config.go:309] "Starting node config controller"
	I1229 07:16:58.273143       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:16:58.273192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:16:58.373295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:16:58.373319       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:16:58.373297       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a] <==
	I1229 07:16:56.431710       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:16:57.582799       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:16:57.582966       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:16:57.582983       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:16:57.582994       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:16:57.650101       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:16:57.650150       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:57.654413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:16:57.654472       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:57.654534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:16:57.654775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:16:57.756806       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:10 embed-certs-739827 kubelet[735]: E1229 07:17:10.693758     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:10 embed-certs-739827 kubelet[735]: E1229 07:17:10.821571     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-739827" containerName="kube-scheduler"
	Dec 29 07:17:11 embed-certs-739827 kubelet[735]: E1229 07:17:11.696202     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-739827" containerName="kube-scheduler"
	Dec 29 07:17:15 embed-certs-739827 kubelet[735]: E1229 07:17:15.354327     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-739827" containerName="kube-controller-manager"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: E1229 07:17:20.621932     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: I1229 07:17:20.621967     735 scope.go:122] "RemoveContainer" containerID="36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: I1229 07:17:20.719079     735 scope.go:122] "RemoveContainer" containerID="36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: E1229 07:17:20.719463     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: I1229 07:17:20.719500     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: E1229 07:17:20.719780     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:28 embed-certs-739827 kubelet[735]: I1229 07:17:28.742109     735 scope.go:122] "RemoveContainer" containerID="8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e"
	Dec 29 07:17:29 embed-certs-739827 kubelet[735]: E1229 07:17:29.014361     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:29 embed-certs-739827 kubelet[735]: I1229 07:17:29.014414     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:29 embed-certs-739827 kubelet[735]: E1229 07:17:29.014657     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:34 embed-certs-739827 kubelet[735]: E1229 07:17:34.507094     735 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-55529" containerName="coredns"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: E1229 07:17:46.621918     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: I1229 07:17:46.621962     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: I1229 07:17:46.796624     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: E1229 07:17:46.796859     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: I1229 07:17:46.796897     735 scope.go:122] "RemoveContainer" containerID="eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: E1229 07:17:46.797120     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: kubelet.service: Consumed 1.774s CPU time.
	
	
	==> kubernetes-dashboard [aad9ce3cf88d2672fedb660dd957d3e79fcfbbdeb5e444da96051de7009cea2d] <==
	2025/12/29 07:17:05 Starting overwatch
	2025/12/29 07:17:05 Using namespace: kubernetes-dashboard
	2025/12/29 07:17:05 Using in-cluster config to connect to apiserver
	2025/12/29 07:17:05 Using secret token for csrf signing
	2025/12/29 07:17:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:17:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:17:05 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:17:05 Generating JWE encryption key
	2025/12/29 07:17:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:17:05 Initializing JWE encryption key from synchronized object
	2025/12/29 07:17:05 Creating in-cluster Sidecar client
	2025/12/29 07:17:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:05 Serving insecurely on HTTP port: 9090
	2025/12/29 07:17:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16] <==
	I1229 07:17:28.798146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:17:28.805256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:17:28.805318       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:17:28.807309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:32.262409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:36.523406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:40.121492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:43.175771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:46.198251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:46.203798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:46.204005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:17:46.204100       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"347ea651-c327-4cd5-b9c6-ab5a3882fdf7", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-739827_ca89ef00-a9eb-4d5e-93a5-3bff5cdbddd0 became leader
	I1229 07:17:46.204182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-739827_ca89ef00-a9eb-4d5e-93a5-3bff5cdbddd0!
	W1229 07:17:46.206260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:46.210326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:46.305029       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-739827_ca89ef00-a9eb-4d5e-93a5-3bff5cdbddd0!
	W1229 07:17:48.213588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:48.221839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:50.226259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:50.277531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e] <==
	I1229 07:16:58.026363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:17:28.034874       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-739827 -n embed-certs-739827
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-739827 -n embed-certs-739827: exit status 2 (474.704936ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-739827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-739827
helpers_test.go:244: (dbg) docker inspect embed-certs-739827:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510",
	        "Created": "2025-12-29T07:15:42.247731806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268404,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:16:49.272550429Z",
	            "FinishedAt": "2025-12-29T07:16:48.366627723Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/hosts",
	        "LogPath": "/var/lib/docker/containers/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510/5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510-json.log",
	        "Name": "/embed-certs-739827",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-739827:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-739827",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d317fcd0cf2e464b4302b3edb0923fd8225c8487adab12364480a572c254510",
	                "LowerDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ddb85c4d6d055685246eef346b309475d08181c1a016f3bdcb20ebf98f7bc7c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-739827",
	                "Source": "/var/lib/docker/volumes/embed-certs-739827/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-739827",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-739827",
	                "name.minikube.sigs.k8s.io": "embed-certs-739827",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ecbd40f6344d61174b0d6cdd84af861ca9e73ac711be404978ef32e426a12d05",
	            "SandboxKey": "/var/run/docker/netns/ecbd40f6344d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-739827": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b087e00cc8440c1f4006081344d5fbc0a2e6dd2a74b7013ef26beec3a624ea25",
	                    "EndpointID": "69fff274e635ba82d9c7956bd856b1c2d1033928217e53b2838d6384759c2472",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:91:39:5f:2f:e6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-739827",
	                        "5d317fcd0cf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827: exit status 2 (417.027225ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-739827 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-739827 logs -n 25: (1.644307206s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ stop    │ -p newest-cni-067566 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-174577                                                                                                                                                                                                                  │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ embed-certs-739827 image list --format=json                                                                                                                                                                                                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p embed-certs-739827 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ image   │ default-k8s-diff-port-798607 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-619064                  │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-798607 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:48.840797  283828 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:48.841045  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841055  283828 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:48.841062  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841280  283828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:48.841791  283828 out.go:368] Setting JSON to false
	I1229 07:17:48.842910  283828 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3621,"bootTime":1766989048,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:48.842979  283828 start.go:143] virtualization: kvm guest
	I1229 07:17:48.844736  283828 out.go:179] * [auto-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:48.845826  283828 notify.go:221] Checking for updates...
	I1229 07:17:48.845856  283828 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:48.846918  283828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:48.848029  283828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:48.849267  283828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:48.850430  283828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:48.851389  283828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:48.852763  283828 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852851  283828 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852967  283828 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.853079  283828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:17:48.886042  283828 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:17:48.886270  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:48.958368  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:17:48.945550839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:48.958528  283828 docker.go:319] overlay module found
	I1229 07:17:48.960723  283828 out.go:179] * Using the docker driver based on user configuration
	I1229 07:17:48.962115  283828 start.go:309] selected driver: docker
	I1229 07:17:48.962139  283828 start.go:928] validating driver "docker" against <nil>
	I1229 07:17:48.962156  283828 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:17:48.962959  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:49.036845  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:17:49.023955716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:49.037106  283828 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:17:49.037420  283828 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:17:49.039270  283828 out.go:179] * Using Docker driver with root privileges
	I1229 07:17:49.040420  283828 cni.go:84] Creating CNI manager for ""
	I1229 07:17:49.040485  283828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:49.040495  283828 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:17:49.040561  283828 start.go:353] cluster config:
	{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1229 07:17:49.041849  283828 out.go:179] * Starting "auto-619064" primary control-plane node in "auto-619064" cluster
	I1229 07:17:49.043144  283828 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:17:49.044792  283828 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:17:49.045939  283828 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:49.045971  283828 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:17:49.045979  283828 cache.go:65] Caching tarball of preloaded images
	I1229 07:17:49.046047  283828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:17:49.046077  283828 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:17:49.046088  283828 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:17:49.046215  283828 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json ...
	I1229 07:17:49.046253  283828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json: {Name:mk9baeefab07482d719bbe5fc1c8ed346993a174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:49.074420  283828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:17:49.074442  283828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:17:49.074464  283828 cache.go:243] Successfully downloaded all kic artifacts
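
The lines above show minikube checking whether the pinned kicbase image is already in the local Docker daemon and skipping the pull when it is. A minimal shell sketch of the same presence check, using the image reference and digest from this log (the check itself is illustrative, not minikube's code):

    # Does the local daemon already hold the pinned kicbase digest?
    DIGEST="sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409"
    if docker images --digests --format '{{.Repository}}@{{.Digest}}' \
        | grep -q "gcr.io/k8s-minikube/kicbase-builds@${DIGEST}"; then
      echo "kicbase image present locally - pull skipped"
    else
      docker pull "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@${DIGEST}"
    fi
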
	I1229 07:17:49.074504  283828 start.go:360] acquireMachinesLock for auto-619064: {Name:mk846f65ba6df3e8e6a1f86164308301a22a7b28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:17:49.074631  283828 start.go:364] duration metric: took 103.352µs to acquireMachinesLock for "auto-619064"
	I1229 07:17:49.074660  283828 start.go:93] Provisioning new machine with config: &{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:49.074755  283828 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:17:44.397135  281965 out.go:252] * Restarting existing docker container for "newest-cni-067566" ...
	I1229 07:17:44.397210  281965 cli_runner.go:164] Run: docker start newest-cni-067566
	I1229 07:17:44.690272  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:44.717474  281965 kic.go:430] container "newest-cni-067566" state is running.
	I1229 07:17:44.717921  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:44.743901  281965 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json ...
	I1229 07:17:44.744189  281965 machine.go:94] provisionDockerMachine start ...
	I1229 07:17:44.744274  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:44.768576  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:44.768891  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:44.768924  281965 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:17:44.770493  281965 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57186->127.0.0.1:33098: read: connection reset by peer
	I1229 07:17:47.914089  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:47.914114  281965 ubuntu.go:182] provisioning hostname "newest-cni-067566"
	I1229 07:17:47.914174  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:47.934478  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:47.934810  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:47.934832  281965 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-067566 && echo "newest-cni-067566" | sudo tee /etc/hostname
	I1229 07:17:48.090581  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:48.090653  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.111676  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.111979  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.112010  281965 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-067566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-067566/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-067566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:17:48.252535  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:17:48.252565  281965 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:17:48.252607  281965 ubuntu.go:190] setting up certificates
	I1229 07:17:48.252633  281965 provision.go:84] configureAuth start
	I1229 07:17:48.252749  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:48.273854  281965 provision.go:143] copyHostCerts
	I1229 07:17:48.273916  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:17:48.273935  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:17:48.274004  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:17:48.274141  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:17:48.274153  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:17:48.274197  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:17:48.274307  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:17:48.274318  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:17:48.274356  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:17:48.274453  281965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.newest-cni-067566 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-067566]
	I1229 07:17:48.299081  281965 provision.go:177] copyRemoteCerts
	I1229 07:17:48.299165  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:17:48.299241  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.317986  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.420562  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:17:48.439388  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:17:48.458327  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:17:48.476027  281965 provision.go:87] duration metric: took 223.366415ms to configureAuth
	I1229 07:17:48.476058  281965 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:17:48.476241  281965 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.476348  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.497081  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.497420  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.497457  281965 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:17:48.800973  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:17:48.801000  281965 machine.go:97] duration metric: took 4.056798061s to provisionDockerMachine
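
The SSH command above writes /etc/sysconfig/crio.minikube with the --insecure-registry option for the service CIDR and restarts CRI-O. A short sketch of verifying that result on the node; the file path and contents are taken from the log, the "minikube ssh" invocation is illustrative:

    # Inside the node, e.g. via: minikube -p newest-cni-067566 ssh
    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio   # should report "active" after the restart above
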
	I1229 07:17:48.801014  281965 start.go:293] postStartSetup for "newest-cni-067566" (driver="docker")
	I1229 07:17:48.801028  281965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:17:48.801107  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:17:48.801169  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.822694  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.929634  281965 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:17:48.935128  281965 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:17:48.935160  281965 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:17:48.935174  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:17:48.935265  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:17:48.935372  281965 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:17:48.935496  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:17:48.945366  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:48.966328  281965 start.go:296] duration metric: took 165.300332ms for postStartSetup
	I1229 07:17:48.966399  281965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:17:48.966445  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.995761  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.102276  281965 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:17:49.107575  281965 fix.go:56] duration metric: took 4.734013486s for fixHost
	I1229 07:17:49.107603  281965 start.go:83] releasing machines lock for "newest-cni-067566", held for 4.734065769s
	I1229 07:17:49.107664  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:49.128559  281965 ssh_runner.go:195] Run: cat /version.json
	I1229 07:17:49.128616  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.128663  281965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:17:49.128754  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.150708  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.150993  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.250916  281965 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:49.315877  281965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:17:49.356072  281965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:17:49.361829  281965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:17:49.361914  281965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:17:49.370024  281965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:17:49.370051  281965 start.go:496] detecting cgroup driver to use...
	I1229 07:17:49.370093  281965 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:17:49.370140  281965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:17:49.384478  281965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:17:49.399113  281965 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:17:49.399172  281965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:17:49.416774  281965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:17:49.431582  281965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:17:49.548080  281965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:17:49.643089  281965 docker.go:234] disabling docker service ...
	I1229 07:17:49.643159  281965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:17:49.662626  281965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:17:49.682582  281965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:17:49.801951  281965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:17:49.911105  281965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:17:49.929444  281965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:17:49.946306  281965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:17:49.946376  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.957380  281965 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:17:49.957441  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.968147  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.978493  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.996157  281965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:17:50.005465  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.015637  281965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.024813  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.034513  281965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:17:50.043120  281965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:17:50.051941  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:50.172961  281965 ssh_runner.go:195] Run: sudo systemctl restart crio
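
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A sketch of reading the touched keys back; the expected values are reconstructed from the sed expressions in this log, not from a dump of the file:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
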
	I1229 07:17:50.841119  281965 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:17:50.841193  281965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:17:50.846002  281965 start.go:574] Will wait 60s for crictl version
	I1229 07:17:50.846051  281965 ssh_runner.go:195] Run: which crictl
	I1229 07:17:50.850121  281965 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:17:50.883548  281965 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:17:50.883634  281965 ssh_runner.go:195] Run: crio --version
	I1229 07:17:50.912399  281965 ssh_runner.go:195] Run: crio --version
	I1229 07:17:50.953893  281965 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:17:50.955740  281965 cli_runner.go:164] Run: docker network inspect newest-cni-067566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:50.978260  281965 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:17:50.982618  281965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
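
The one-liner above refreshes the host.minikube.internal entry idempotently: any stale line is stripped and the current mapping re-appended before the file is copied back. The same pattern, spread out for readability (IP and hostname copied from the log):

    # Rebuild /etc/hosts without any old host.minikube.internal line, then add the current one.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.94.1\thost.minikube.internal'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
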
	I1229 07:17:50.996958  281965 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1229 07:17:50.998969  281965 kubeadm.go:884] updating cluster {Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:17:50.999133  281965 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:50.999199  281965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:17:51.047006  281965 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:17:51.047035  281965 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:17:51.047104  281965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:17:51.080967  281965 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:17:51.080993  281965 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:17:51.081002  281965 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:17:51.081136  281965 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-067566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
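
The kubelet unit drop-in rendered above is copied a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and picked up with a daemon-reload. A small sketch of confirming it on the node, assuming those same paths:

    # The drop-in overrides ExecStart with the flags shown above.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -- --node-ip   # expect --node-ip=192.168.94.2
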
	I1229 07:17:51.081264  281965 ssh_runner.go:195] Run: crio config
	I1229 07:17:51.143516  281965 cni.go:84] Creating CNI manager for ""
	I1229 07:17:51.143544  281965 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:51.143562  281965 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1229 07:17:51.143592  281965 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-067566 NodeName:newest-cni-067566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:17:51.143792  281965 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-067566"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:17:51.143967  281965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:17:51.154663  281965 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:17:51.154739  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:17:51.163231  281965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:17:51.180379  281965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:17:51.194793  281965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
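
The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and, further down in this log, diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A sketch of that check:

    # If the newly rendered config matches the one already on the node,
    # the restart path can skip re-running kubeadm (as this log later reports).
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "kubeadm config unchanged - no reconfiguration required"
    else
      echo "kubeadm config changed - control plane would be reconfigured"
    fi
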
	I1229 07:17:51.209176  281965 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:17:51.213781  281965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:51.225619  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:51.343234  281965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:51.376074  281965 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566 for IP: 192.168.94.2
	I1229 07:17:51.376093  281965 certs.go:195] generating shared ca certs ...
	I1229 07:17:51.376111  281965 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:51.376340  281965 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:17:51.376400  281965 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:17:51.376413  281965 certs.go:257] generating profile certs ...
	I1229 07:17:51.376517  281965 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.key
	I1229 07:17:51.376583  281965 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key.f6ce96bf
	I1229 07:17:51.376640  281965 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key
	I1229 07:17:51.376793  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:17:51.376849  281965 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:17:51.376868  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:17:51.376916  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:17:51.376953  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:17:51.376985  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:17:51.377052  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:51.377919  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:17:51.398520  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:17:51.419391  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:17:51.442650  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:17:51.470646  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:17:51.506385  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:17:51.531059  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:17:51.550655  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:17:51.571836  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:17:51.594410  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:17:51.614508  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:17:51.634570  281965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:17:51.650629  281965 ssh_runner.go:195] Run: openssl version
	I1229 07:17:51.657957  281965 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.670436  281965 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:17:51.681300  281965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.685865  281965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.685923  281965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.731800  281965 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:17:51.739827  281965 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.749550  281965 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:17:51.759285  281965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.763661  281965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.763715  281965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.808611  281965 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:17:51.816279  281965 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.824749  281965 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:17:51.833269  281965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.837458  281965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.837515  281965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.887787  281965 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
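
Each certificate copied to /usr/share/ca-certificates is hashed with "openssl x509 -hash" and then looked up as a <subject-hash>.0 symlink under /etc/ssl/certs (the "test -L" lines above; for minikubeCA.pem the link checked is b5213941.0). A minimal sketch of that convention:

    # OpenSSL's trust directory layout: a symlink named <subject-hash>.0 per CA.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${HASH}.0"
    # "openssl rehash" (or the older c_rehash script) regenerates these links in bulk.
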
	I1229 07:17:51.896533  281965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:17:51.901090  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:17:51.952369  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:17:52.010393  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:17:52.047251  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:17:52.097957  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:17:52.136766  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
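
The "-checkend 86400" calls above ask OpenSSL whether each control-plane certificate expires within the next 24 hours (86,400 seconds); exit status 0 means the certificate is still valid past that window. A tiny sketch:

    # Exit 0: still valid 86400s from now; exit 1: expires within that window (or already expired).
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h"
    fi
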
	I1229 07:17:52.191885  281965 kubeadm.go:401] StartCluster: {Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:17:52.191994  281965 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:17:52.192040  281965 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:17:52.228460  281965 cri.go:96] found id: ""
	I1229 07:17:52.228531  281965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:17:52.237298  281965 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:17:52.237316  281965 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:17:52.237360  281965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:17:52.245197  281965 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:17:52.246000  281965 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-067566" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:52.246518  281965 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-067566" cluster setting kubeconfig missing "newest-cni-067566" context setting]
	I1229 07:17:52.247499  281965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:52.249536  281965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:17:52.257408  281965 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1229 07:17:52.257439  281965 kubeadm.go:602] duration metric: took 20.118222ms to restartPrimaryControlPlane
	I1229 07:17:52.257449  281965 kubeadm.go:403] duration metric: took 65.574946ms to StartCluster
	I1229 07:17:52.257470  281965 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:52.257532  281965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:52.259664  281965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:52.261498  281965 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:52.261582  281965 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:17:52.261694  281965 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-067566"
	I1229 07:17:52.261703  281965 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:52.261714  281965 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-067566"
	W1229 07:17:52.261727  281965 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:17:52.261751  281965 addons.go:70] Setting default-storageclass=true in profile "newest-cni-067566"
	I1229 07:17:52.261756  281965 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:52.261767  281965 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-067566"
	I1229 07:17:52.261764  281965 addons.go:70] Setting dashboard=true in profile "newest-cni-067566"
	I1229 07:17:52.261805  281965 addons.go:239] Setting addon dashboard=true in "newest-cni-067566"
	W1229 07:17:52.261818  281965 addons.go:248] addon dashboard should already be in state true
	I1229 07:17:52.261850  281965 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:52.262089  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.262256  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.262365  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.390337  281965 addons.go:239] Setting addon default-storageclass=true in "newest-cni-067566"
	W1229 07:17:52.390362  281965 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:17:52.390396  281965 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:52.390482  281965 out.go:179] * Verifying Kubernetes components...
	I1229 07:17:52.390882  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.431110  281965 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:52.431134  281965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:17:52.431205  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:52.434602  281965 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:17:52.434606  281965 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:17:52.450780  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:52.554841  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:52.576858  281965 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:52.577938  281965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:17:52.577995  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:52.607996  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:52.619697  281965 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
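
The cli_runner and sshutil lines above show how minikube reaches the node: it asks Docker which host port is published for the container's 22/tcp and then opens an SSH session to 127.0.0.1 on that port (33098 here). A minimal standalone Go sketch of the same lookup, a hypothetical helper rather than minikube's own code, assuming Docker is on PATH and the profile container exists:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort asks Docker which host port is mapped to 22/tcp inside the
    // named node container, mirroring the "docker container inspect -f" call
    // logged above.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("newest-cni-067566") // profile name taken from the log
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }
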
	
	
	==> CRI-O <==
	Dec 29 07:17:20 embed-certs-739827 crio[571]: time="2025-12-29T07:17:20.668192837Z" level=info msg="Started container" PID=1800 containerID=feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper id=c76e817c-a7b7-49aa-b967-66b32d6cad84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48790b8ecc0a5f5083cca4db87f99e10893da7de9f0c5891e52d51bf9810c4b1
	Dec 29 07:17:20 embed-certs-739827 crio[571]: time="2025-12-29T07:17:20.720343007Z" level=info msg="Removing container: 36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4" id=66e3f517-5ab9-4063-afb9-19b30c25994a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:20 embed-certs-739827 crio[571]: time="2025-12-29T07:17:20.729528314Z" level=info msg="Removed container 36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=66e3f517-5ab9-4063-afb9-19b30c25994a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.742618134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a714508e-f3f6-4f48-a28c-0338ec7f40c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.743767615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8a33acfd-2a02-4aaf-96a1-dee7a41abb1f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.744989757Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=043d3105-aff8-46ea-b260-8e8f3732b568 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.745172518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.749851041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.750548347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ddb39b9554d3191988643cbc3b1e4d9ba5c97ac3ee4f4d875a52203655228bab/merged/etc/passwd: no such file or directory"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.750587511Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ddb39b9554d3191988643cbc3b1e4d9ba5c97ac3ee4f4d875a52203655228bab/merged/etc/group: no such file or directory"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.750905642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.782498551Z" level=info msg="Created container 704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16: kube-system/storage-provisioner/storage-provisioner" id=043d3105-aff8-46ea-b260-8e8f3732b568 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.783081909Z" level=info msg="Starting container: 704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16" id=0983650b-504b-4841-a345-9c9a61e95892 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:28 embed-certs-739827 crio[571]: time="2025-12-29T07:17:28.785339989Z" level=info msg="Started container" PID=1814 containerID=704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16 description=kube-system/storage-provisioner/storage-provisioner id=0983650b-504b-4841-a345-9c9a61e95892 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02ab04d8adcb09c16a16ca241d983d78eb3e2571ecbeebcd952606e1b3068a81
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.622547274Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4d556068-55e1-4e6d-b496-1056346b5ce5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.623713938Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2e4860c1-04c2-49d3-bb79-32fc48a90ea5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.624855576Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=3e24f1fc-bb8e-4c67-a4d6-c24369cd41f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.625051822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.630556302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.631037235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.658163102Z" level=info msg="Created container eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=3e24f1fc-bb8e-4c67-a4d6-c24369cd41f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.658786617Z" level=info msg="Starting container: eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2" id=02b12608-0fc5-497e-8ddd-dfc8eea80a1d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.660411174Z" level=info msg="Started container" PID=1853 containerID=eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper id=02b12608-0fc5-497e-8ddd-dfc8eea80a1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=48790b8ecc0a5f5083cca4db87f99e10893da7de9f0c5891e52d51bf9810c4b1
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.797906705Z" level=info msg="Removing container: feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408" id=15c95264-98c7-4c0d-b4e3-51a27f5de49d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:46 embed-certs-739827 crio[571]: time="2025-12-29T07:17:46.80658633Z" level=info msg="Removed container feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv/dashboard-metrics-scraper" id=15c95264-98c7-4c0d-b4e3-51a27f5de49d name=/runtime.v1.RuntimeService/RemoveContainer
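
The CRI-O entries above were collected on the node itself; when the test harness needs the same view (the cri.go lines elsewhere in this report), it runs crictl over SSH and filters containers by their pod-namespace label. A minimal sketch of that listing, assuming it runs directly on the node with sudo and crictl available:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs lists kube-system container IDs the same way the
    // harness does: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
            "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            panic(err)
        }
        fmt.Println(len(ids), "kube-system containers:", ids)
    }
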
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	eb918f980511a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   48790b8ecc0a5       dashboard-metrics-scraper-867fb5f87b-sglpv   kubernetes-dashboard
	704468951f6d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   02ab04d8adcb0       storage-provisioner                          kube-system
	aad9ce3cf88d2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   0e546d9258df3       kubernetes-dashboard-b84665fb8-rdq2m         kubernetes-dashboard
	18fb52cea0d34       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   c78ca58b08718       coredns-7d764666f9-55529                     kube-system
	83f35284bd1fb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   967d7bdf739a6       busybox                                      default
	f2078890d2c20       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   d7dcd4a1d5298       kindnet-l6mxr                                kube-system
	8832475ac0d10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   02ab04d8adcb0       storage-provisioner                          kube-system
	1af994b88cd9a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           56 seconds ago      Running             kube-proxy                  0                   dd92199c0d384       kube-proxy-hdmp6                             kube-system
	64d38c25f85b2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           57 seconds ago      Running             kube-apiserver              0                   28daa174d0ef2       kube-apiserver-embed-certs-739827            kube-system
	9212464b12efa       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           57 seconds ago      Running             kube-scheduler              0                   3e97ba4238d86       kube-scheduler-embed-certs-739827            kube-system
	f8f720f7da228       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           57 seconds ago      Running             kube-controller-manager     0                   bc01cdda86309       kube-controller-manager-embed-certs-739827   kube-system
	0b939e4faa562       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        0                   3f0414e81a726       etcd-embed-certs-739827                      kube-system
	
	
	==> coredns [18fb52cea0d3400acddabe689116097412e230e6ba2b4477769aa7d3e66a805d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33939 - 33460 "HINFO IN 4050320932017595841.7466373877937374045. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03362485s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
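
The repeated "Plugins not ready: kubernetes" lines come from CoreDNS's ready plugin, which keeps the readiness endpoint (by default /ready on port 8181) unready until the kubernetes plugin has synced with the API server. A quick confirmation that the pod eventually went Ready is, for example, kubectl --context embed-certs-739827 -n kube-system get pods -l k8s-app=kube-dns; the port and the k8s-app=kube-dns label are the stock kubeadm/CoreDNS defaults and are assumptions here, not values shown in this log.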
	
	
	==> describe nodes <==
	Name:               embed-certs-739827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-739827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=embed-certs-739827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_15_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-739827
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:15:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:17:28 +0000   Mon, 29 Dec 2025 07:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-739827
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                ab46c7d0-f92f-48dd-a29d-7cfb62a7d0f3
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-55529                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-embed-certs-739827                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-l6mxr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-739827             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-embed-certs-739827    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-hdmp6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-739827             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-sglpv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rdq2m          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node embed-certs-739827 event: Registered Node embed-certs-739827 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node embed-certs-739827 event: Registered Node embed-certs-739827 in Controller
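
The percentages in the Allocated resources table above are simply requests over allocatable, reported as whole percent: 850m CPU requested against 8000m allocatable is 850/8000 ≈ 10.6%, shown as 10%, and 220Mi of memory against 32863360Ki (about 31.3Gi) is roughly 0.7%, shown as 0%.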
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0b939e4faa5624d77348fcf707669fb95bdce762e69420b9e5dde5b8d7fad11c] <==
	{"level":"info","ts":"2025-12-29T07:16:56.670552Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:56.670631Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-29T07:16:56.670649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:56.670669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.671364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.671385Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:16:56.671406Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.671419Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-29T07:16:56.672480Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-739827 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:16:56.672500Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:56.672526Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:16:56.672709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:56.672748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:16:56.673807Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:56.673836Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:16:56.677097Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:16:56.677400Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-29T07:17:18.324534Z","caller":"traceutil/trace.go:172","msg":"trace[2003940404] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"175.322301ms","start":"2025-12-29T07:17:18.149189Z","end":"2025-12-29T07:17:18.324511Z","steps":["trace[2003940404] 'process raft request'  (duration: 175.180237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-29T07:17:18.489317Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.440144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-739827\" limit:1 ","response":"range_response_count:1 size:5960"}
	{"level":"info","ts":"2025-12-29T07:17:18.489453Z","caller":"traceutil/trace.go:172","msg":"trace[186191666] range","detail":"{range_begin:/registry/minions/embed-certs-739827; range_end:; response_count:1; response_revision:623; }","duration":"162.563213ms","start":"2025-12-29T07:17:18.326842Z","end":"2025-12-29T07:17:18.489405Z","steps":["trace[186191666] 'agreement among raft nodes before linearized reading'  (duration: 36.516229ms)","trace[186191666] 'range keys from in-memory index tree'  (duration: 125.708212ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-29T07:17:18.489887Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.979556ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873791002224553064 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-739827\" mod_revision:609 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-29T07:17:18.489971Z","caller":"traceutil/trace.go:172","msg":"trace[1155581574] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"307.030326ms","start":"2025-12-29T07:17:18.182926Z","end":"2025-12-29T07:17:18.489956Z","steps":["trace[1155581574] 'process raft request'  (duration: 180.457902ms)","trace[1155581574] 'compare'  (duration: 125.777811ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-29T07:17:18.490035Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-29T07:17:18.182906Z","time spent":"307.093891ms","remote":"127.0.0.1:42478","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-739827\" mod_revision:609 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-739827\" > >"}
	{"level":"info","ts":"2025-12-29T07:17:19.016447Z","caller":"traceutil/trace.go:172","msg":"trace[1192439289] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"129.494233ms","start":"2025-12-29T07:17:18.886932Z","end":"2025-12-29T07:17:19.016427Z","steps":["trace[1192439289] 'process raft request'  (duration: 128.110859ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T07:17:52.434977Z","caller":"traceutil/trace.go:172","msg":"trace[1006585847] transaction","detail":"{read_only:false; response_revision:662; number_of_response:1; }","duration":"149.857251ms","start":"2025-12-29T07:17:52.285097Z","end":"2025-12-29T07:17:52.434954Z","steps":["trace[1006585847] 'process raft request'  (duration: 149.711674ms)"],"step_count":1}
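
The "apply request took too long" warnings above trigger whenever an apply exceeds etcd's 100ms expected-duration. Given the load average above 4 reported in the kernel section below, the 150-300ms traces here most likely reflect CPU and disk contention on the shared CI host rather than a problem in the cluster itself.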
	
	
	==> kernel <==
	 07:17:54 up  1:00,  0 user,  load average: 4.25, 3.10, 2.18
	Linux embed-certs-739827 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f2078890d2c20f1615244415e59607cf2ee2465b8956242073cb8e5b80673001] <==
	I1229 07:16:58.290327       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:16:58.290622       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1229 07:16:58.290824       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:16:58.290857       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:16:58.290885       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:16:58.491207       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:16:58.491270       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:16:58.491287       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:16:58.491543       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:16:58.791432       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:16:58.791458       1 metrics.go:72] Registering metrics
	I1229 07:16:58.791510       1 controller.go:711] "Syncing nftables rules"
	I1229 07:17:08.491341       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:08.491404       1 main.go:301] handling current node
	I1229 07:17:18.491829       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:18.491884       1 main.go:301] handling current node
	I1229 07:17:28.498305       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:28.498346       1 main.go:301] handling current node
	I1229 07:17:38.493357       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:38.493426       1 main.go:301] handling current node
	I1229 07:17:48.491366       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1229 07:17:48.491724       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64d38c25f85b27ef903c4b442a4a233566702ef4d41de37f0bd76a24a6632555] <==
	I1229 07:16:57.661447       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:16:57.664801       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:57.662159       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:16:57.664568       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:16:57.664598       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:57.665005       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:16:57.664863       1 policy_source.go:248] refreshing policies
	I1229 07:16:57.665015       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:16:57.665023       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:16:57.665030       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:16:57.662058       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:16:57.671045       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:16:57.709547       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:16:57.714571       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:16:57.788288       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:16:57.994366       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:16:58.051747       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:16:58.075970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:16:58.089140       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:16:58.132856       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.135.194"}
	I1229 07:16:58.142589       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.230.131"}
	I1229 07:16:58.564099       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:01.289530       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:17:01.337693       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:01.388084       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f8f720f7da22897696acdb14fb867efe0f070b8de40dde3450d76b6859332adc] <==
	I1229 07:17:00.794979       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.794987       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795103       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795140       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795154       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.803054       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:17:00.803128       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:17:00.803159       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:00.803203       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.795200       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.793832       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:17:00.802316       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804611       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804695       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804714       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804733       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804737       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804612       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.804624       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.807285       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:00.812235       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.904372       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:00.904399       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:00.904405       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:00.907447       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1af994b88cd9a20f9f21db7006c416782dc168261061cd2ae2e686e54a934563] <==
	I1229 07:16:58.080789       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:16:58.143579       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:58.244380       1 shared_informer.go:377] "Caches are synced"
	I1229 07:16:58.244415       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1229 07:16:58.244548       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:16:58.264117       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:16:58.264180       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:16:58.269542       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:16:58.270296       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:16:58.270342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:58.272417       1 config.go:200] "Starting service config controller"
	I1229 07:16:58.272440       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:16:58.272465       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:16:58.272471       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:16:58.272488       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:16:58.272493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:16:58.272710       1 config.go:309] "Starting node config controller"
	I1229 07:16:58.273143       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:16:58.273192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:16:58.373295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:16:58.373319       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:16:58.373297       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9212464b12efa806f75edd62f5a28621d98bc923f0f5c51a13c6e0475b23ee0a] <==
	I1229 07:16:56.431710       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:16:57.582799       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:16:57.582966       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:16:57.582983       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:16:57.582994       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:16:57.650101       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:16:57.650150       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:16:57.654413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:16:57.654472       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:16:57.654534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:16:57.654775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:16:57.756806       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:10 embed-certs-739827 kubelet[735]: E1229 07:17:10.693758     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:10 embed-certs-739827 kubelet[735]: E1229 07:17:10.821571     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-739827" containerName="kube-scheduler"
	Dec 29 07:17:11 embed-certs-739827 kubelet[735]: E1229 07:17:11.696202     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-739827" containerName="kube-scheduler"
	Dec 29 07:17:15 embed-certs-739827 kubelet[735]: E1229 07:17:15.354327     735 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-739827" containerName="kube-controller-manager"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: E1229 07:17:20.621932     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: I1229 07:17:20.621967     735 scope.go:122] "RemoveContainer" containerID="36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: I1229 07:17:20.719079     735 scope.go:122] "RemoveContainer" containerID="36b6d8077eccbd26463acd2b39ffdb7882883d5320899c3b869a184ec564b4e4"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: E1229 07:17:20.719463     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: I1229 07:17:20.719500     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:20 embed-certs-739827 kubelet[735]: E1229 07:17:20.719780     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:28 embed-certs-739827 kubelet[735]: I1229 07:17:28.742109     735 scope.go:122] "RemoveContainer" containerID="8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e"
	Dec 29 07:17:29 embed-certs-739827 kubelet[735]: E1229 07:17:29.014361     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:29 embed-certs-739827 kubelet[735]: I1229 07:17:29.014414     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:29 embed-certs-739827 kubelet[735]: E1229 07:17:29.014657     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:34 embed-certs-739827 kubelet[735]: E1229 07:17:34.507094     735 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-55529" containerName="coredns"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: E1229 07:17:46.621918     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: I1229 07:17:46.621962     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: I1229 07:17:46.796624     735 scope.go:122] "RemoveContainer" containerID="feda0a75a638296e3ddf5c652055d9b00fc2c11e0ffa5bc53f8c8a1d1e1de408"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: E1229 07:17:46.796859     735 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: I1229 07:17:46.796897     735 scope.go:122] "RemoveContainer" containerID="eb918f980511aef13b7e4f8bd78fdf35a7588bb9c363b3367caa3a60f30c3ec2"
	Dec 29 07:17:46 embed-certs-739827 kubelet[735]: E1229 07:17:46.797120     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sglpv_kubernetes-dashboard(fa13a956-8468-408f-a28c-4f12b60eede7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sglpv" podUID="fa13a956-8468-408f-a28c-4f12b60eede7"
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:17:48 embed-certs-739827 systemd[1]: kubelet.service: Consumed 1.774s CPU time.
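
The back-off values in the errors above (10s, then 20s, then 40s) follow kubelet's standard CrashLoopBackOff schedule: the restart delay starts at 10 seconds and doubles after each failed start, capped at five minutes, so dashboard-metrics-scraper keeps being retried at growing intervals for as long as it keeps exiting.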
	
	
	==> kubernetes-dashboard [aad9ce3cf88d2672fedb660dd957d3e79fcfbbdeb5e444da96051de7009cea2d] <==
	2025/12/29 07:17:05 Using namespace: kubernetes-dashboard
	2025/12/29 07:17:05 Using in-cluster config to connect to apiserver
	2025/12/29 07:17:05 Using secret token for csrf signing
	2025/12/29 07:17:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:17:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:17:05 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:17:05 Generating JWE encryption key
	2025/12/29 07:17:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:17:05 Initializing JWE encryption key from synchronized object
	2025/12/29 07:17:05 Creating in-cluster Sidecar client
	2025/12/29 07:17:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:05 Serving insecurely on HTTP port: 9090
	2025/12/29 07:17:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:05 Starting overwatch
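
The two "Metric client health check failed" lines are consistent with the container status and kubelet sections above: dashboard-metrics-scraper is in CrashLoopBackOff, so its Service has no ready endpoints and the dashboard's Sidecar client retries the health check every 30 seconds until the scraper stays up.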
	
	
	==> storage-provisioner [704468951f6d121993ccbed029ae4733cf4e68c409c3406d0b0c0e1e83ee7a16] <==
	I1229 07:17:28.798146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:17:28.805256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:17:28.805318       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:17:28.807309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:32.262409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:36.523406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:40.121492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:43.175771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:46.198251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:46.203798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:46.204005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:17:46.204100       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"347ea651-c327-4cd5-b9c6-ab5a3882fdf7", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-739827_ca89ef00-a9eb-4d5e-93a5-3bff5cdbddd0 became leader
	I1229 07:17:46.204182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-739827_ca89ef00-a9eb-4d5e-93a5-3bff5cdbddd0!
	W1229 07:17:46.206260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:46.210326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:46.305029       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-739827_ca89ef00-a9eb-4d5e-93a5-3bff5cdbddd0!
	W1229 07:17:48.213588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:48.221839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:50.226259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:50.277531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:52.281693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:52.436274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:54.440198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:54.446889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
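
The provisioner above takes its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is why the client keeps emitting the "v1 Endpoints is deprecated" warning while it polls. If the election ever looks stuck, the current holder can be read from that object, e.g. kubectl --context embed-certs-739827 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml; the holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation, which is the client-go default and an assumption here rather than something shown in this log.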
	
	
	==> storage-provisioner [8832475ac0d106938d4161128b43278743eaed163bb0b266a8fe65ce2718ec8e] <==
	I1229 07:16:58.026363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:17:28.034874       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-739827 -n embed-certs-739827
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-739827 -n embed-certs-739827: exit status 2 (485.583027ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-739827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-798607 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-798607 --alsologtostderr -v=1: exit status 80 (1.77313343s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-798607 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:17:48.999711  283952 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:49.001461  283952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:49.001524  283952 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:49.001544  283952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:49.002894  283952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:49.003252  283952 out.go:368] Setting JSON to false
	I1229 07:17:49.003276  283952 mustload.go:66] Loading cluster: default-k8s-diff-port-798607
	I1229 07:17:49.003671  283952 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:49.004132  283952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-798607 --format={{.State.Status}}
	I1229 07:17:49.026796  283952 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:49.027054  283952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:49.096766  283952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-29 07:17:49.084318863 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:49.097745  283952 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-798607 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:17:49.100033  283952 out.go:179] * Pausing node default-k8s-diff-port-798607 ... 
	I1229 07:17:49.101970  283952 host.go:66] Checking if "default-k8s-diff-port-798607" exists ...
	I1229 07:17:49.102358  283952 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:49.102400  283952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-798607
	I1229 07:17:49.126108  283952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/default-k8s-diff-port-798607/id_rsa Username:docker}
	I1229 07:17:49.230780  283952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:49.244769  283952 pause.go:52] kubelet running: true
	I1229 07:17:49.244834  283952 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:49.436588  283952 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:49.436668  283952 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:49.533662  283952 cri.go:96] found id: "69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df"
	I1229 07:17:49.533702  283952 cri.go:96] found id: "a46d6765151b2df42c57b4fd3ae7acdca7c9fc096b1807fb848aabf31db30901"
	I1229 07:17:49.533709  283952 cri.go:96] found id: "afae4df43cd8f643833e16cb1295db765b29ebb67de964afad4a41ff8974936e"
	I1229 07:17:49.533714  283952 cri.go:96] found id: "b9f478121ddba24483732c5638ef28b71257f5d523b1dae6cfb332585c61c40c"
	I1229 07:17:49.533718  283952 cri.go:96] found id: "e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf"
	I1229 07:17:49.533730  283952 cri.go:96] found id: "2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102"
	I1229 07:17:49.533735  283952 cri.go:96] found id: "b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608"
	I1229 07:17:49.533739  283952 cri.go:96] found id: "7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833"
	I1229 07:17:49.533743  283952 cri.go:96] found id: "c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da"
	I1229 07:17:49.533772  283952 cri.go:96] found id: "47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d"
	I1229 07:17:49.533777  283952 cri.go:96] found id: "2ed73ae37565746fb8f6e353e039947e5998ca65d383bea0334243d9ae71661b"
	I1229 07:17:49.533781  283952 cri.go:96] found id: ""
	I1229 07:17:49.533836  283952 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:49.553697  283952 retry.go:84] will retry after 400ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:49Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:17:49.923451  283952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:49.939476  283952 pause.go:52] kubelet running: false
	I1229 07:17:49.939530  283952 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:50.127882  283952 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:50.127963  283952 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:50.212615  283952 cri.go:96] found id: "69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df"
	I1229 07:17:50.212640  283952 cri.go:96] found id: "a46d6765151b2df42c57b4fd3ae7acdca7c9fc096b1807fb848aabf31db30901"
	I1229 07:17:50.212647  283952 cri.go:96] found id: "afae4df43cd8f643833e16cb1295db765b29ebb67de964afad4a41ff8974936e"
	I1229 07:17:50.212652  283952 cri.go:96] found id: "b9f478121ddba24483732c5638ef28b71257f5d523b1dae6cfb332585c61c40c"
	I1229 07:17:50.212656  283952 cri.go:96] found id: "e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf"
	I1229 07:17:50.212661  283952 cri.go:96] found id: "2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102"
	I1229 07:17:50.212665  283952 cri.go:96] found id: "b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608"
	I1229 07:17:50.212669  283952 cri.go:96] found id: "7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833"
	I1229 07:17:50.212674  283952 cri.go:96] found id: "c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da"
	I1229 07:17:50.212681  283952 cri.go:96] found id: "47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d"
	I1229 07:17:50.212685  283952 cri.go:96] found id: "2ed73ae37565746fb8f6e353e039947e5998ca65d383bea0334243d9ae71661b"
	I1229 07:17:50.212700  283952 cri.go:96] found id: ""
	I1229 07:17:50.212756  283952 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:50.429619  283952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:50.443347  283952 pause.go:52] kubelet running: false
	I1229 07:17:50.443408  283952 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:50.579607  283952 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:50.579704  283952 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:50.649079  283952 cri.go:96] found id: "69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df"
	I1229 07:17:50.649101  283952 cri.go:96] found id: "a46d6765151b2df42c57b4fd3ae7acdca7c9fc096b1807fb848aabf31db30901"
	I1229 07:17:50.649104  283952 cri.go:96] found id: "afae4df43cd8f643833e16cb1295db765b29ebb67de964afad4a41ff8974936e"
	I1229 07:17:50.649108  283952 cri.go:96] found id: "b9f478121ddba24483732c5638ef28b71257f5d523b1dae6cfb332585c61c40c"
	I1229 07:17:50.649110  283952 cri.go:96] found id: "e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf"
	I1229 07:17:50.649113  283952 cri.go:96] found id: "2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102"
	I1229 07:17:50.649116  283952 cri.go:96] found id: "b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608"
	I1229 07:17:50.649119  283952 cri.go:96] found id: "7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833"
	I1229 07:17:50.649121  283952 cri.go:96] found id: "c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da"
	I1229 07:17:50.649127  283952 cri.go:96] found id: "47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d"
	I1229 07:17:50.649130  283952 cri.go:96] found id: "2ed73ae37565746fb8f6e353e039947e5998ca65d383bea0334243d9ae71661b"
	I1229 07:17:50.649132  283952 cri.go:96] found id: ""
	I1229 07:17:50.649167  283952 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:50.685042  283952 out.go:203] 
	W1229 07:17:50.686430  283952 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:17:50.686449  283952 out.go:285] * 
	* 
	W1229 07:17:50.688667  283952 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:17:50.693470  283952 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-798607 --alsologtostderr -v=1 failed: exit status 80
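The exit status 80 recorded here corresponds to the GUEST_PAUSE error in the stderr log above: the pause path disables the kubelet, lists the running CRI containers with crictl (which succeeds and returns the same eleven container IDs on every pass), and then shells out to `sudo runc list -f json`, which fails on this crio node with `open /run/runc: no such file or directory`; after the 400ms retry, two more passes hit the same error and the command gives up. A minimal sketch for reproducing just the failing step by hand, assuming the profile from this run is still up (the commands mirror the ssh_runner lines in the log):

	# listing CRI containers the way pause does succeeds
	out/minikube-linux-amd64 -p default-k8s-diff-port-798607 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# enumerating runtime containers is the step that fails
	out/minikube-linux-amd64 -p default-k8s-diff-port-798607 ssh -- sudo runc list -f json
	# expected error on this image: open /run/runc: no such file or directory

The missing /run/runc directory means no containers are registered under runc's default state root inside the node, so every retry fails identically even though crictl still reports the containers as running.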
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-798607
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-798607:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277",
	        "Created": "2025-12-29T07:15:54.159908787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:16:53.914489521Z",
	            "FinishedAt": "2025-12-29T07:16:52.999356505Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/hostname",
	        "HostsPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/hosts",
	        "LogPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277-json.log",
	        "Name": "/default-k8s-diff-port-798607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-798607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-798607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277",
	                "LowerDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-798607",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-798607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-798607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-798607",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-798607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c253a66473beba267d26ce0f5712ca2af1d4b0bd77a88785b80eaab7744ad659",
	            "SandboxKey": "/var/run/docker/netns/c253a66473be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-798607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a50196d85ec6cf5fe29b96f215bd3c465a58a5511f7e880d6481f36ac7ca686a",
	                    "EndpointID": "ae9b5d75bf8deee45afafaafc65eb9a476f6c22a7ef99ff7c7ae20e070cc1cf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "e2:9d:1c:35:ca:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-798607",
	                        "430601fd040d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
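The inspect output suggests the Docker side is healthy and the failure is confined to the guest: the kic container is still "running" (started 2025-12-29T07:16:53), all of its service ports (22, 2376, 5000, 8444, 32443) are published on 127.0.0.1, and the node holds 192.168.85.2 on the default-k8s-diff-port-798607 network. For reference, the SSH port the harness connected to can be read back from this output using the same Go template the cli_runner lines above use; a stand-alone shell form of that call (single quotes added around the template for the shell) is:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-798607
	# prints 33088, matching the sshutil.go line in the pause log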
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607: exit status 2 (392.991304ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
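The host field still reports Running; the non-zero exit is expected here because `minikube status` reports component problems through its exit code, and the aborted pause had already disabled the kubelet (the stderr log flips from "kubelet running: true" to "false"), so the helper treats the exit code as informational ("may be ok") and relies on the printed {{.Host}} value. Checking the same thing by hand would look like this sketch (`|| true` just ignores the expected non-zero exit):

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 || true
	# prints Running even though the exit code flags other components as down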
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-798607 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-798607 logs -n 25: (2.545893343s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ stop    │ -p newest-cni-067566 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-174577                                                                                                                                                                                                                  │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ embed-certs-739827 image list --format=json                                                                                                                                                                                                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p embed-certs-739827 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ image   │ default-k8s-diff-port-798607 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-619064                  │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-798607 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:48.840797  283828 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:48.841045  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841055  283828 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:48.841062  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841280  283828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:48.841791  283828 out.go:368] Setting JSON to false
	I1229 07:17:48.842910  283828 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3621,"bootTime":1766989048,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:48.842979  283828 start.go:143] virtualization: kvm guest
	I1229 07:17:48.844736  283828 out.go:179] * [auto-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:48.845826  283828 notify.go:221] Checking for updates...
	I1229 07:17:48.845856  283828 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:48.846918  283828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:48.848029  283828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:48.849267  283828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:48.850430  283828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:48.851389  283828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:48.852763  283828 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852851  283828 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852967  283828 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.853079  283828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:17:48.886042  283828 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:17:48.886270  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:48.958368  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:17:48.945550839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:48.958528  283828 docker.go:319] overlay module found
	I1229 07:17:48.960723  283828 out.go:179] * Using the docker driver based on user configuration
	I1229 07:17:48.962115  283828 start.go:309] selected driver: docker
	I1229 07:17:48.962139  283828 start.go:928] validating driver "docker" against <nil>
	I1229 07:17:48.962156  283828 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:17:48.962959  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:49.036845  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:17:49.023955716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:49.037106  283828 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:17:49.037420  283828 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:17:49.039270  283828 out.go:179] * Using Docker driver with root privileges
	I1229 07:17:49.040420  283828 cni.go:84] Creating CNI manager for ""
	I1229 07:17:49.040485  283828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:49.040495  283828 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:17:49.040561  283828 start.go:353] cluster config:
	{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1229 07:17:49.041849  283828 out.go:179] * Starting "auto-619064" primary control-plane node in "auto-619064" cluster
	I1229 07:17:49.043144  283828 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:17:49.044792  283828 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:17:49.045939  283828 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:49.045971  283828 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:17:49.045979  283828 cache.go:65] Caching tarball of preloaded images
	I1229 07:17:49.046047  283828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:17:49.046077  283828 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:17:49.046088  283828 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:17:49.046215  283828 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json ...
	I1229 07:17:49.046253  283828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json: {Name:mk9baeefab07482d719bbe5fc1c8ed346993a174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:49.074420  283828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:17:49.074442  283828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:17:49.074464  283828 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:17:49.074504  283828 start.go:360] acquireMachinesLock for auto-619064: {Name:mk846f65ba6df3e8e6a1f86164308301a22a7b28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:17:49.074631  283828 start.go:364] duration metric: took 103.352µs to acquireMachinesLock for "auto-619064"
	I1229 07:17:49.074660  283828 start.go:93] Provisioning new machine with config: &{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:49.074755  283828 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:17:44.397135  281965 out.go:252] * Restarting existing docker container for "newest-cni-067566" ...
	I1229 07:17:44.397210  281965 cli_runner.go:164] Run: docker start newest-cni-067566
	I1229 07:17:44.690272  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:44.717474  281965 kic.go:430] container "newest-cni-067566" state is running.
	I1229 07:17:44.717921  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:44.743901  281965 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json ...
	I1229 07:17:44.744189  281965 machine.go:94] provisionDockerMachine start ...
	I1229 07:17:44.744274  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:44.768576  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:44.768891  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:44.768924  281965 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:17:44.770493  281965 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57186->127.0.0.1:33098: read: connection reset by peer
	I1229 07:17:47.914089  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:47.914114  281965 ubuntu.go:182] provisioning hostname "newest-cni-067566"
	I1229 07:17:47.914174  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:47.934478  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:47.934810  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:47.934832  281965 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-067566 && echo "newest-cni-067566" | sudo tee /etc/hostname
	I1229 07:17:48.090581  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:48.090653  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.111676  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.111979  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.112010  281965 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-067566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-067566/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-067566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:17:48.252535  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:17:48.252565  281965 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:17:48.252607  281965 ubuntu.go:190] setting up certificates
	I1229 07:17:48.252633  281965 provision.go:84] configureAuth start
	I1229 07:17:48.252749  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:48.273854  281965 provision.go:143] copyHostCerts
	I1229 07:17:48.273916  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:17:48.273935  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:17:48.274004  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:17:48.274141  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:17:48.274153  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:17:48.274197  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:17:48.274307  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:17:48.274318  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:17:48.274356  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:17:48.274453  281965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.newest-cni-067566 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-067566]
	I1229 07:17:48.299081  281965 provision.go:177] copyRemoteCerts
	I1229 07:17:48.299165  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:17:48.299241  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.317986  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.420562  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:17:48.439388  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:17:48.458327  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:17:48.476027  281965 provision.go:87] duration metric: took 223.366415ms to configureAuth
	I1229 07:17:48.476058  281965 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:17:48.476241  281965 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.476348  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.497081  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.497420  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.497457  281965 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:17:48.800973  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:17:48.801000  281965 machine.go:97] duration metric: took 4.056798061s to provisionDockerMachine
	I1229 07:17:48.801014  281965 start.go:293] postStartSetup for "newest-cni-067566" (driver="docker")
	I1229 07:17:48.801028  281965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:17:48.801107  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:17:48.801169  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.822694  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.929634  281965 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:17:48.935128  281965 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:17:48.935160  281965 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:17:48.935174  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:17:48.935265  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:17:48.935372  281965 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:17:48.935496  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:17:48.945366  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:48.966328  281965 start.go:296] duration metric: took 165.300332ms for postStartSetup
	I1229 07:17:48.966399  281965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:17:48.966445  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.995761  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.102276  281965 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:17:49.107575  281965 fix.go:56] duration metric: took 4.734013486s for fixHost
	I1229 07:17:49.107603  281965 start.go:83] releasing machines lock for "newest-cni-067566", held for 4.734065769s
	I1229 07:17:49.107664  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:49.128559  281965 ssh_runner.go:195] Run: cat /version.json
	I1229 07:17:49.128616  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.128663  281965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:17:49.128754  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.150708  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.150993  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.250916  281965 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:49.315877  281965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:17:49.356072  281965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:17:49.361829  281965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:17:49.361914  281965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:17:49.370024  281965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:17:49.370051  281965 start.go:496] detecting cgroup driver to use...
	I1229 07:17:49.370093  281965 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:17:49.370140  281965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:17:49.384478  281965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:17:49.399113  281965 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:17:49.399172  281965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:17:49.416774  281965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:17:49.431582  281965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:17:49.548080  281965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:17:49.643089  281965 docker.go:234] disabling docker service ...
	I1229 07:17:49.643159  281965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:17:49.662626  281965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:17:49.682582  281965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:17:49.801951  281965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:17:49.911105  281965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:17:49.929444  281965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:17:49.946306  281965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:17:49.946376  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.957380  281965 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:17:49.957441  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.968147  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.978493  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.996157  281965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:17:50.005465  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.015637  281965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.024813  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.034513  281965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:17:50.043120  281965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:17:50.051941  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:50.172961  281965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:17:50.841119  281965 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:17:50.841193  281965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:17:50.846002  281965 start.go:574] Will wait 60s for crictl version
	I1229 07:17:50.846051  281965 ssh_runner.go:195] Run: which crictl
	I1229 07:17:50.850121  281965 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:17:50.883548  281965 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:17:50.883634  281965 ssh_runner.go:195] Run: crio --version
	I1229 07:17:50.912399  281965 ssh_runner.go:195] Run: crio --version
	I1229 07:17:50.953893  281965 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:17:50.955740  281965 cli_runner.go:164] Run: docker network inspect newest-cni-067566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:50.978260  281965 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:17:50.982618  281965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:50.996958  281965 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
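	Taken together, the provisioning log above shows minikube pointing crictl at the CRI-O socket, selecting the pause image and the systemd cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, and then restarting the runtime. A minimal sketch of the same steps for manual reproduction on a comparable node, assuming the same config file layout as the minikube node image:
	
	    # Point crictl at the CRI-O socket.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # Select the pause image and the systemd cgroup manager used in this run.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    # Apply the changes and restart CRI-O.
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio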
	
	
	==> CRI-O <==
	Dec 29 07:17:21 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:21.321970664Z" level=info msg="Started container" PID=1793 containerID=33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper id=340f7cc2-93b3-4017-b322-7fc5200a180c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f345cab99434c686863a84487c68846422559230477f3ba3cdcbacaa339384a
	Dec 29 07:17:21 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:21.3554131Z" level=info msg="Removing container: 3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57" id=cc12e7ce-93f6-4f21-b668-05ac36b15eb5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:21 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:21.365352866Z" level=info msg="Removed container 3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=cc12e7ce-93f6-4f21-b668-05ac36b15eb5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.385821127Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7868b1d8-3b08-4a35-a938-a1db3463348f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.386773506Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f6501361-4378-4cce-ac9c-6acf881a9c7b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.387848394Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=258f5f5b-ead9-4f0d-bdb8-a392f7d27078 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.38799984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392396268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392559008Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3f6e1d2125dd7e72e38d997cd458de1594256f4f2f1cce624f2df49640814f30/merged/etc/passwd: no such file or directory"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392583472Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3f6e1d2125dd7e72e38d997cd458de1594256f4f2f1cce624f2df49640814f30/merged/etc/group: no such file or directory"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392906371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.420167676Z" level=info msg="Created container 69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df: kube-system/storage-provisioner/storage-provisioner" id=258f5f5b-ead9-4f0d-bdb8-a392f7d27078 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.420845864Z" level=info msg="Starting container: 69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df" id=1a40e3bf-8cb0-4de8-925e-d46fdd0704c1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.423346958Z" level=info msg="Started container" PID=1808 containerID=69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df description=kube-system/storage-provisioner/storage-provisioner id=1a40e3bf-8cb0-4de8-925e-d46fdd0704c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eed119c40b9dd0e5d0e7007b036ea1849575c5fef760ee4fa9df09b24701e85a
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.267517715Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0ef62de6-fe37-4501-a0a7-d9c67a8ba62f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.268505231Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=447be089-ac58-456e-bd31-dc13dabc7163 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.269688894Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=efb9ebd2-0600-432d-980a-eb5518c84570 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.269847775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.275941026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.276496573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.315912484Z" level=info msg="Created container 47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=efb9ebd2-0600-432d-980a-eb5518c84570 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.317071916Z" level=info msg="Starting container: 47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d" id=1413dc38-3ce4-4c7c-a260-1bef2db02553 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.319441319Z" level=info msg="Started container" PID=1848 containerID=47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper id=1413dc38-3ce4-4c7c-a260-1bef2db02553 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f345cab99434c686863a84487c68846422559230477f3ba3cdcbacaa339384a
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.417251799Z" level=info msg="Removing container: 33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a" id=2880610f-c6da-4ae9-a93b-75983a23f30d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.42599672Z" level=info msg="Removed container 33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=2880610f-c6da-4ae9-a93b-75983a23f30d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	47b17f578b8c3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   0f345cab99434       dashboard-metrics-scraper-867fb5f87b-w65p9             kubernetes-dashboard
	69c2b94428a3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   eed119c40b9dd       storage-provisioner                                    kube-system
	2ed73ae375657       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   d7832d2a1391c       kubernetes-dashboard-b84665fb8-mj5lz                   kubernetes-dashboard
	15e0b29dc7bd9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   606eb3e626ced       busybox                                                default
	a46d6765151b2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   079eebb5c8464       coredns-7d764666f9-jwmww                               kube-system
	afae4df43cd8f       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           48 seconds ago      Running             kindnet-cni                 0                   20c808da8c2b3       kindnet-m6jd2                                          kube-system
	b9f478121ddba       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           48 seconds ago      Running             kube-proxy                  0                   e6a7968ba35a9       kube-proxy-4mnzc                                       kube-system
	e449e0b1473e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   eed119c40b9dd       storage-provisioner                                    kube-system
	2b72f7f6b29d9       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           51 seconds ago      Running             kube-controller-manager     0                   3ba471fdf50f1       kube-controller-manager-default-k8s-diff-port-798607   kube-system
	b68c52dc0f0ed       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           51 seconds ago      Running             kube-scheduler              0                   67239053ff742       kube-scheduler-default-k8s-diff-port-798607            kube-system
	7adaca7a38cbd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           51 seconds ago      Running             kube-apiserver              0                   84418b38402af       kube-apiserver-default-k8s-diff-port-798607            kube-system
	c791e2da2999f       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           51 seconds ago      Running             etcd                        0                   6696d2256aaef       etcd-default-k8s-diff-port-798607                      kube-system
	
	
	==> coredns [a46d6765151b2df42c57b4fd3ae7acdca7c9fc096b1807fb848aabf31db30901] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48293 - 40621 "HINFO IN 8612991051879489531.2804412481474677624. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030638956s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-798607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-798607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=default-k8s-diff-port-798607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_16_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-798607
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-798607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                b24ee258-37aa-4e3b-b0b9-8a7f17d3bb24
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-7d764666f9-jwmww                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-default-k8s-diff-port-798607                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-m6jd2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-default-k8s-diff-port-798607             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-798607    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-4mnzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-default-k8s-diff-port-798607             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-w65p9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-mj5lz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  103s  node-controller  Node default-k8s-diff-port-798607 event: Registered Node default-k8s-diff-port-798607 in Controller
	  Normal  RegisteredNode  46s   node-controller  Node default-k8s-diff-port-798607 event: Registered Node default-k8s-diff-port-798607 in Controller
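	The node summary above is ordinary kubectl output and can be regenerated against the same cluster; an illustrative pair of commands, assuming kubectl is pointed at this profile's kubeconfig:
	
	    kubectl describe node default-k8s-diff-port-798607
	    # A condensed view of the same conditions, capacity and addresses.
	    kubectl get node default-k8s-diff-port-798607 -o wide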
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da] <==
	{"level":"info","ts":"2025-12-29T07:17:00.857087Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:17:00.857256Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:01.847299Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:01.847375Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:01.847448Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:01.847473Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:01.847491Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.848832Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.848866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:01.848885Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.848892Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.849659Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-798607 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:17:01.849681Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:01.849676Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:01.849912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:01.849949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:01.851888Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:01.851961Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:01.854505Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:17:01.854860Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:17:07.139785Z","caller":"traceutil/trace.go:172","msg":"trace[456230644] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"110.192207ms","start":"2025-12-29T07:17:07.029569Z","end":"2025-12-29T07:17:07.139761Z","steps":["trace[456230644] 'process raft request'  (duration: 78.067677ms)","trace[456230644] 'compare'  (duration: 31.511522ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-29T07:17:07.139942Z","caller":"traceutil/trace.go:172","msg":"trace[16617097] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"110.277675ms","start":"2025-12-29T07:17:07.029646Z","end":"2025-12-29T07:17:07.139924Z","steps":["trace[16617097] 'process raft request'  (duration: 110.02756ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T07:17:07.140046Z","caller":"traceutil/trace.go:172","msg":"trace[36207201] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"108.34171ms","start":"2025-12-29T07:17:07.031695Z","end":"2025-12-29T07:17:07.140037Z","steps":["trace[36207201] 'process raft request'  (duration: 108.198845ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T07:17:07.140421Z","caller":"traceutil/trace.go:172","msg":"trace[2010144027] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"108.714185ms","start":"2025-12-29T07:17:07.031693Z","end":"2025-12-29T07:17:07.140407Z","steps":["trace[2010144027] 'process raft request'  (duration: 108.13916ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-29T07:17:18.494315Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.756714ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722598045696995638 > lease_revoke:<id:06ed9b68f6c4518f>","response":"size:28"}
	
	
	==> kernel <==
	 07:17:53 up  1:00,  0 user,  load average: 4.25, 3.10, 2.18
	Linux default-k8s-diff-port-798607 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afae4df43cd8f643833e16cb1295db765b29ebb67de964afad4a41ff8974936e] <==
	I1229 07:17:03.916501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:17:03.917099       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:17:03.917328       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:17:03.917361       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:17:03.917386       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:17:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:17:04.126334       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:17:04.317973       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:17:04.318028       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:17:04.318527       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:17:04.618821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:17:04.618844       1 metrics.go:72] Registering metrics
	I1229 07:17:04.618933       1 controller.go:711] "Syncing nftables rules"
	I1229 07:17:14.126532       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:14.126629       1 main.go:301] handling current node
	I1229 07:17:24.127335       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:24.127375       1 main.go:301] handling current node
	I1229 07:17:34.126304       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:34.126362       1 main.go:301] handling current node
	I1229 07:17:44.133301       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:44.133337       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833] <==
	I1229 07:17:02.982340       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:17:02.982444       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:17:02.982681       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:02.983687       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:02.983768       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:17:02.983896       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:17:02.983991       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:17:02.984013       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:17:02.984020       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:17:02.984027       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:17:02.984552       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:02.988979       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1229 07:17:03.014630       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:17:03.020650       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:17:03.365667       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:17:03.376797       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:17:03.412805       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:17:03.436864       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:17:03.443678       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:17:03.489319       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.227.7"}
	I1229 07:17:03.500956       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.181.177"}
	I1229 07:17:03.887004       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:06.672269       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:17:06.732205       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:06.825329       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102] <==
	I1229 07:17:06.154309       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154285       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-798607"
	I1229 07:17:06.154323       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154340       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154374       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154354       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:17:06.154404       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154287       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154521       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154093       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154627       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154639       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154649       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154378       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154316       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.156367       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154613       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.158232       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:06.160703       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.161403       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.255259       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.255338       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:06.255357       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:06.258676       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.818430       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [b9f478121ddba24483732c5638ef28b71257f5d523b1dae6cfb332585c61c40c] <==
	I1229 07:17:03.709033       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:17:03.797592       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:03.897947       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:03.897989       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:17:03.898099       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:17:03.923887       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:17:03.924004       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:17:03.931248       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:17:03.932004       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:17:03.932047       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:03.938504       1 config.go:200] "Starting service config controller"
	I1229 07:17:03.938534       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:17:03.938577       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:17:03.938584       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:17:03.938601       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:17:03.938607       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:17:03.938671       1 config.go:309] "Starting node config controller"
	I1229 07:17:03.938684       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:17:03.938693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:17:04.039627       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:17:04.039627       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:17:04.039744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608] <==
	I1229 07:17:01.065343       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:17:02.924831       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:17:02.924874       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1229 07:17:02.924890       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:17:02.924899       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:17:02.963947       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:17:02.964037       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:02.970823       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:17:02.970868       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:02.971088       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:17:02.971180       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:17:03.071298       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:19.347880     738 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-798607" containerName="etcd"
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:19.752430     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:19.752481     738 scope.go:122] "RemoveContainer" containerID="3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57"
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:19.752730     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:21.266060     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:21.266102     738 scope.go:122] "RemoveContainer" containerID="3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:21.354064     738 scope.go:122] "RemoveContainer" containerID="3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:21.354372     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:21.354406     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:21.354597     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:29 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:29.752323     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:29 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:29.752371     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:29 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:29.752681     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:34 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:34.385340     738 scope.go:122] "RemoveContainer" containerID="e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf"
	Dec 29 07:17:35 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:35.304569     738 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwmww" containerName="coredns"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:44.266912     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:44.266966     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:44.415647     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:44.415957     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:44.415990     738 scope.go:122] "RemoveContainer" containerID="47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:44.416197     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: kubelet.service: Consumed 1.702s CPU time.
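	The kubelet entries above show dashboard-metrics-scraper in CrashLoopBackOff with the back-off growing from 10s to 40s. A typical follow-up from outside the node is sketched below (the pod name is the one from this run and will differ elsewhere):
	
	    kubectl -n kubernetes-dashboard get pods
	    kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-w65p9
	    # Logs of the previously crashed container instance.
	    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-867fb5f87b-w65p9 --previous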
	
	
	==> kubernetes-dashboard [2ed73ae37565746fb8f6e353e039947e5998ca65d383bea0334243d9ae71661b] <==
	2025/12/29 07:17:13 Using namespace: kubernetes-dashboard
	2025/12/29 07:17:13 Using in-cluster config to connect to apiserver
	2025/12/29 07:17:13 Using secret token for csrf signing
	2025/12/29 07:17:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:17:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:17:13 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:17:13 Generating JWE encryption key
	2025/12/29 07:17:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:17:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:17:13 Initializing JWE encryption key from synchronized object
	2025/12/29 07:17:13 Creating in-cluster Sidecar client
	2025/12/29 07:17:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:13 Serving insecurely on HTTP port: 9090
	2025/12/29 07:17:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:13 Starting overwatch
	
	
	==> storage-provisioner [69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df] <==
	I1229 07:17:34.439172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:17:34.447961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:17:34.448050       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:17:34.451591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:37.907021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:42.167542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:45.766984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:48.820802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:51.844196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:51.849530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:51.849701       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:17:51.849881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-798607_002dbc3a-7251-4f49-87c2-2d3908ad2a2f!
	I1229 07:17:51.849899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5078cbe3-2c7d-4503-aba9-6d953718bd88", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-798607_002dbc3a-7251-4f49-87c2-2d3908ad2a2f became leader
	W1229 07:17:51.853467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:51.857496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:51.950072       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-798607_002dbc3a-7251-4f49-87c2-2d3908ad2a2f!
	
	
	==> storage-provisioner [e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf] <==
	I1229 07:17:03.651601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:17:33.654624       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
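The excerpt above ends with the dashboard-metrics-scraper pod stuck in CrashLoopBackOff and the first storage-provisioner instance dying on an i/o timeout against 10.96.0.1:443. A minimal way to re-check both symptoms by hand (a sketch only; it assumes kubectl can reach the same default-k8s-diff-port-798607 context and that the provisioner pod uses minikube's standard name storage-provisioner; the scraper pod name is taken from the kubelet lines above):

	kubectl --context default-k8s-diff-port-798607 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-w65p9
	kubectl --context default-k8s-diff-port-798607 -n kube-system logs storage-provisioner --previous

The --previous flag is what surfaces the crashed container's final log line, i.e. the timeout shown in the e449e0b1 storage-provisioner block above.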
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607: exit status 2 (432.692287ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-798607
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-798607:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277",
	        "Created": "2025-12-29T07:15:54.159908787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:16:53.914489521Z",
	            "FinishedAt": "2025-12-29T07:16:52.999356505Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/hostname",
	        "HostsPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/hosts",
	        "LogPath": "/var/lib/docker/containers/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277/430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277-json.log",
	        "Name": "/default-k8s-diff-port-798607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-798607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-798607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "430601fd040d9a7e1a9a24ba66eb256f26a98ca00bf762061420bd7abf8da277",
	                "LowerDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/934a99af38cf59b603256a4b9c3c25dd4ffa4ebaa0e924a1acf3daedfa4003e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-798607",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-798607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-798607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-798607",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-798607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c253a66473beba267d26ce0f5712ca2af1d4b0bd77a88785b80eaab7744ad659",
	            "SandboxKey": "/var/run/docker/netns/c253a66473be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-798607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a50196d85ec6cf5fe29b96f215bd3c465a58a5511f7e880d6481f36ac7ca686a",
	                    "EndpointID": "ae9b5d75bf8deee45afafaafc65eb9a476f6c22a7ef99ff7c7ae20e070cc1cf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "e2:9d:1c:35:ca:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-798607",
	                        "430601fd040d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
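The NetworkSettings.Ports section of this inspect output is what the harness later reads to locate the node's SSH endpoint (see the repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls in the Last Start log below). A shell-friendly sketch of the same lookup, reusing the container name from this output:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-798607

Against the mapping shown above this should print 33088, the host port bound to 22/tcp.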
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607: exit status 2 (447.042412ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-798607 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-798607 logs -n 25: (1.443798983s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-739827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p embed-certs-739827 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-798607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-798607 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:16 UTC │
	│ start   │ -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:16 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ stop    │ -p newest-cni-067566 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-174577                                                                                                                                                                                                                  │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ embed-certs-739827 image list --format=json                                                                                                                                                                                                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p embed-certs-739827 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ image   │ default-k8s-diff-port-798607 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-619064                  │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-798607 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:48.840797  283828 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:48.841045  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841055  283828 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:48.841062  283828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:48.841280  283828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:48.841791  283828 out.go:368] Setting JSON to false
	I1229 07:17:48.842910  283828 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3621,"bootTime":1766989048,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:48.842979  283828 start.go:143] virtualization: kvm guest
	I1229 07:17:48.844736  283828 out.go:179] * [auto-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:48.845826  283828 notify.go:221] Checking for updates...
	I1229 07:17:48.845856  283828 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:48.846918  283828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:48.848029  283828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:48.849267  283828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:48.850430  283828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:48.851389  283828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:48.852763  283828 config.go:182] Loaded profile config "default-k8s-diff-port-798607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852851  283828 config.go:182] Loaded profile config "embed-certs-739827": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.852967  283828 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.853079  283828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:17:48.886042  283828 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:17:48.886270  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:48.958368  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:17:48.945550839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:48.958528  283828 docker.go:319] overlay module found
	I1229 07:17:48.960723  283828 out.go:179] * Using the docker driver based on user configuration
	I1229 07:17:48.962115  283828 start.go:309] selected driver: docker
	I1229 07:17:48.962139  283828 start.go:928] validating driver "docker" against <nil>
	I1229 07:17:48.962156  283828 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:17:48.962959  283828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:49.036845  283828 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:17:49.023955716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:49.037106  283828 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:17:49.037420  283828 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:17:49.039270  283828 out.go:179] * Using Docker driver with root privileges
	I1229 07:17:49.040420  283828 cni.go:84] Creating CNI manager for ""
	I1229 07:17:49.040485  283828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:49.040495  283828 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:17:49.040561  283828 start.go:353] cluster config:
	{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1229 07:17:49.041849  283828 out.go:179] * Starting "auto-619064" primary control-plane node in "auto-619064" cluster
	I1229 07:17:49.043144  283828 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:17:49.044792  283828 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:17:49.045939  283828 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:49.045971  283828 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:17:49.045979  283828 cache.go:65] Caching tarball of preloaded images
	I1229 07:17:49.046047  283828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:17:49.046077  283828 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:17:49.046088  283828 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:17:49.046215  283828 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json ...
	I1229 07:17:49.046253  283828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/auto-619064/config.json: {Name:mk9baeefab07482d719bbe5fc1c8ed346993a174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:49.074420  283828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:17:49.074442  283828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:17:49.074464  283828 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:17:49.074504  283828 start.go:360] acquireMachinesLock for auto-619064: {Name:mk846f65ba6df3e8e6a1f86164308301a22a7b28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:17:49.074631  283828 start.go:364] duration metric: took 103.352µs to acquireMachinesLock for "auto-619064"
	I1229 07:17:49.074660  283828 start.go:93] Provisioning new machine with config: &{Name:auto-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-619064 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:49.074755  283828 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:17:44.397135  281965 out.go:252] * Restarting existing docker container for "newest-cni-067566" ...
	I1229 07:17:44.397210  281965 cli_runner.go:164] Run: docker start newest-cni-067566
	I1229 07:17:44.690272  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:44.717474  281965 kic.go:430] container "newest-cni-067566" state is running.
	I1229 07:17:44.717921  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:44.743901  281965 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/config.json ...
	I1229 07:17:44.744189  281965 machine.go:94] provisionDockerMachine start ...
	I1229 07:17:44.744274  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:44.768576  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:44.768891  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:44.768924  281965 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:17:44.770493  281965 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57186->127.0.0.1:33098: read: connection reset by peer
	I1229 07:17:47.914089  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:47.914114  281965 ubuntu.go:182] provisioning hostname "newest-cni-067566"
	I1229 07:17:47.914174  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:47.934478  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:47.934810  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:47.934832  281965 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-067566 && echo "newest-cni-067566" | sudo tee /etc/hostname
	I1229 07:17:48.090581  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-067566
	
	I1229 07:17:48.090653  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.111676  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.111979  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.112010  281965 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-067566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-067566/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-067566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:17:48.252535  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:17:48.252565  281965 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9207/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9207/.minikube}
	I1229 07:17:48.252607  281965 ubuntu.go:190] setting up certificates
	I1229 07:17:48.252633  281965 provision.go:84] configureAuth start
	I1229 07:17:48.252749  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:48.273854  281965 provision.go:143] copyHostCerts
	I1229 07:17:48.273916  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem, removing ...
	I1229 07:17:48.273935  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem
	I1229 07:17:48.274004  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/ca.pem (1082 bytes)
	I1229 07:17:48.274141  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem, removing ...
	I1229 07:17:48.274153  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem
	I1229 07:17:48.274197  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/cert.pem (1123 bytes)
	I1229 07:17:48.274307  281965 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem, removing ...
	I1229 07:17:48.274318  281965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem
	I1229 07:17:48.274356  281965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9207/.minikube/key.pem (1675 bytes)
	I1229 07:17:48.274453  281965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem org=jenkins.newest-cni-067566 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-067566]
	I1229 07:17:48.299081  281965 provision.go:177] copyRemoteCerts
	I1229 07:17:48.299165  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:17:48.299241  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.317986  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.420562  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:17:48.439388  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:17:48.458327  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:17:48.476027  281965 provision.go:87] duration metric: took 223.366415ms to configureAuth
	I1229 07:17:48.476058  281965 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:17:48.476241  281965 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:48.476348  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.497081  281965 main.go:144] libmachine: Using SSH client type: native
	I1229 07:17:48.497420  281965 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1229 07:17:48.497457  281965 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:17:48.800973  281965 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:17:48.801000  281965 machine.go:97] duration metric: took 4.056798061s to provisionDockerMachine
	I1229 07:17:48.801014  281965 start.go:293] postStartSetup for "newest-cni-067566" (driver="docker")
	I1229 07:17:48.801028  281965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:17:48.801107  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:17:48.801169  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.822694  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:48.929634  281965 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:17:48.935128  281965 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:17:48.935160  281965 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:17:48.935174  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/addons for local assets ...
	I1229 07:17:48.935265  281965 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9207/.minikube/files for local assets ...
	I1229 07:17:48.935372  281965 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem -> 127332.pem in /etc/ssl/certs
	I1229 07:17:48.935496  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:17:48.945366  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:48.966328  281965 start.go:296] duration metric: took 165.300332ms for postStartSetup
	I1229 07:17:48.966399  281965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:17:48.966445  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:48.995761  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.102276  281965 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:17:49.107575  281965 fix.go:56] duration metric: took 4.734013486s for fixHost
	I1229 07:17:49.107603  281965 start.go:83] releasing machines lock for "newest-cni-067566", held for 4.734065769s
	I1229 07:17:49.107664  281965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-067566
	I1229 07:17:49.128559  281965 ssh_runner.go:195] Run: cat /version.json
	I1229 07:17:49.128616  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.128663  281965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:17:49.128754  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:49.150708  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.150993  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:49.250916  281965 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:49.315877  281965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:17:49.356072  281965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:17:49.361829  281965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:17:49.361914  281965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:17:49.370024  281965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:17:49.370051  281965 start.go:496] detecting cgroup driver to use...
	I1229 07:17:49.370093  281965 detect.go:190] detected "systemd" cgroup driver on host os
	I1229 07:17:49.370140  281965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:17:49.384478  281965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:17:49.399113  281965 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:17:49.399172  281965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:17:49.416774  281965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:17:49.431582  281965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:17:49.548080  281965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:17:49.643089  281965 docker.go:234] disabling docker service ...
	I1229 07:17:49.643159  281965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:17:49.662626  281965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:17:49.682582  281965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:17:49.801951  281965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:17:49.911105  281965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:17:49.929444  281965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:17:49.946306  281965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:17:49.946376  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.957380  281965 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:17:49.957441  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.968147  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.978493  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:49.996157  281965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:17:50.005465  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.015637  281965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.024813  281965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:17:50.034513  281965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:17:50.043120  281965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:17:50.051941  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:50.172961  281965 ssh_runner.go:195] Run: sudo systemctl restart crio
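Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. For readers unfamiliar with that pattern, here is a minimal Go sketch of the same kind of single-line config rewrite; it is illustrative only (the local file path and the lone cgroup_manager edit are assumptions), not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Hypothetical local copy of a CRI-O drop-in such as /etc/crio/crio.conf.d/02-crio.conf.
	path := "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}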
	I1229 07:17:50.841119  281965 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:17:50.841193  281965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:17:50.846002  281965 start.go:574] Will wait 60s for crictl version
	I1229 07:17:50.846051  281965 ssh_runner.go:195] Run: which crictl
	I1229 07:17:50.850121  281965 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:17:50.883548  281965 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:17:50.883634  281965 ssh_runner.go:195] Run: crio --version
	I1229 07:17:50.912399  281965 ssh_runner.go:195] Run: crio --version
	I1229 07:17:50.953893  281965 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:17:50.955740  281965 cli_runner.go:164] Run: docker network inspect newest-cni-067566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:50.978260  281965 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1229 07:17:50.982618  281965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:50.996958  281965 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1229 07:17:50.998969  281965 kubeadm.go:884] updating cluster {Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:17:50.999133  281965 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:50.999199  281965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:17:51.047006  281965 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:17:51.047035  281965 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:17:51.047104  281965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:17:51.080967  281965 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:17:51.080993  281965 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:17:51.081002  281965 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1229 07:17:51.081136  281965 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-067566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:17:51.081264  281965 ssh_runner.go:195] Run: crio config
	I1229 07:17:51.143516  281965 cni.go:84] Creating CNI manager for ""
	I1229 07:17:51.143544  281965 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:17:51.143562  281965 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1229 07:17:51.143592  281965 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-067566 NodeName:newest-cni-067566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:17:51.143792  281965 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-067566"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:17:51.143967  281965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:17:51.154663  281965 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:17:51.154739  281965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:17:51.163231  281965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:17:51.180379  281965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:17:51.194793  281965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
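Editor's note: the three "scp memory -->" steps above write the generated kubelet drop-in, kubelet unit, and kubeadm.yaml.new onto the node. A minimal, illustrative Go sketch of rendering such a config from a template and writing it to a staging file follows; the template fields and output path are assumptions for illustration, not minikube's real template or code.

package main

import (
	"os"
	"text/template"
)

// Assumed, simplified template; it only mirrors the InitConfiguration header seen in the log.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Staging path is a placeholder; the log above writes /var/tmp/minikube/kubeadm.yaml.new on the node.
	f, err := os.Create("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Values mirror the node shown in the log (192.168.94.2, port 8443).
	data := struct {
		NodeIP        string
		APIServerPort int
	}{NodeIP: "192.168.94.2", APIServerPort: 8443}
	if err := t.Execute(f, data); err != nil {
		panic(err)
	}
}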
	I1229 07:17:51.209176  281965 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:17:51.213781  281965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:17:51.225619  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:51.343234  281965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:51.376074  281965 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566 for IP: 192.168.94.2
	I1229 07:17:51.376093  281965 certs.go:195] generating shared ca certs ...
	I1229 07:17:51.376111  281965 certs.go:227] acquiring lock for ca certs: {Name:mk9ea2caa33885d3eff30cd6b31b0954826457bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:51.376340  281965 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key
	I1229 07:17:51.376400  281965 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key
	I1229 07:17:51.376413  281965 certs.go:257] generating profile certs ...
	I1229 07:17:51.376517  281965 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/client.key
	I1229 07:17:51.376583  281965 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key.f6ce96bf
	I1229 07:17:51.376640  281965 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key
	I1229 07:17:51.376793  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem (1338 bytes)
	W1229 07:17:51.376849  281965 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733_empty.pem, impossibly tiny 0 bytes
	I1229 07:17:51.376868  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:17:51.376916  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:17:51.376953  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:17:51.376985  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/certs/key.pem (1675 bytes)
	I1229 07:17:51.377052  281965 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem (1708 bytes)
	I1229 07:17:51.377919  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:17:51.398520  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1229 07:17:51.419391  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:17:51.442650  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:17:51.470646  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:17:51.506385  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:17:51.531059  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:17:51.550655  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/newest-cni-067566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:17:51.571836  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/ssl/certs/127332.pem --> /usr/share/ca-certificates/127332.pem (1708 bytes)
	I1229 07:17:51.594410  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:17:51.614508  281965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9207/.minikube/certs/12733.pem --> /usr/share/ca-certificates/12733.pem (1338 bytes)
	I1229 07:17:51.634570  281965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:17:51.650629  281965 ssh_runner.go:195] Run: openssl version
	I1229 07:17:51.657957  281965 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.670436  281965 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12733.pem /etc/ssl/certs/12733.pem
	I1229 07:17:51.681300  281965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.685865  281965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:49 /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.685923  281965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12733.pem
	I1229 07:17:51.731800  281965 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:17:51.739827  281965 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.749550  281965 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127332.pem /etc/ssl/certs/127332.pem
	I1229 07:17:51.759285  281965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.763661  281965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:49 /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.763715  281965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127332.pem
	I1229 07:17:51.808611  281965 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:17:51.816279  281965 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.824749  281965 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:17:51.833269  281965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.837458  281965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:46 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.837515  281965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:17:51.887787  281965 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:17:51.896533  281965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:17:51.901090  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:17:51.952369  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:17:52.010393  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:17:52.047251  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:17:52.097957  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:17:52.136766  281965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:17:52.191885  281965 kubeadm.go:401] StartCluster: {Name:newest-cni-067566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-067566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:17:52.191994  281965 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:17:52.192040  281965 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:17:52.228460  281965 cri.go:96] found id: ""
	I1229 07:17:52.228531  281965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:17:52.237298  281965 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:17:52.237316  281965 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:17:52.237360  281965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:17:52.245197  281965 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:17:52.246000  281965 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-067566" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:52.246518  281965 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-9207/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-067566" cluster setting kubeconfig missing "newest-cni-067566" context setting]
	I1229 07:17:52.247499  281965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:52.249536  281965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:17:52.257408  281965 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1229 07:17:52.257439  281965 kubeadm.go:602] duration metric: took 20.118222ms to restartPrimaryControlPlane
	I1229 07:17:52.257449  281965 kubeadm.go:403] duration metric: took 65.574946ms to StartCluster
	I1229 07:17:52.257470  281965 settings.go:142] acquiring lock: {Name:mkc0a676e8e9f981283a1b55a28ae5261bf35d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:52.257532  281965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:52.259664  281965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/kubeconfig: {Name:mkdd37af1bd6d0f9cc034a64c07c8d33ee5c10ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:17:52.261498  281965 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:17:52.261582  281965 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:17:52.261694  281965 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-067566"
	I1229 07:17:52.261703  281965 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:52.261714  281965 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-067566"
	W1229 07:17:52.261727  281965 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:17:52.261751  281965 addons.go:70] Setting default-storageclass=true in profile "newest-cni-067566"
	I1229 07:17:52.261756  281965 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:52.261767  281965 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-067566"
	I1229 07:17:52.261764  281965 addons.go:70] Setting dashboard=true in profile "newest-cni-067566"
	I1229 07:17:52.261805  281965 addons.go:239] Setting addon dashboard=true in "newest-cni-067566"
	W1229 07:17:52.261818  281965 addons.go:248] addon dashboard should already be in state true
	I1229 07:17:52.261850  281965 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:52.262089  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.262256  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.262365  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.390337  281965 addons.go:239] Setting addon default-storageclass=true in "newest-cni-067566"
	W1229 07:17:52.390362  281965 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:17:52.390396  281965 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:52.390482  281965 out.go:179] * Verifying Kubernetes components...
	I1229 07:17:52.390882  281965 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:52.431110  281965 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:52.431134  281965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:17:52.431205  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:52.434602  281965 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:17:52.434606  281965 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:17:52.450780  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:52.554841  281965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:17:52.576858  281965 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:52.577938  281965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:17:52.577995  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:52.607996  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:52.619697  281965 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:17:49.076928  283828 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:17:49.077259  283828 start.go:159] libmachine.API.Create for "auto-619064" (driver="docker")
	I1229 07:17:49.077293  283828 client.go:173] LocalClient.Create starting
	I1229 07:17:49.077389  283828 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:17:49.077420  283828 main.go:144] libmachine: Decoding PEM data...
	I1229 07:17:49.077433  283828 main.go:144] libmachine: Parsing certificate...
	I1229 07:17:49.077484  283828 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:17:49.077506  283828 main.go:144] libmachine: Decoding PEM data...
	I1229 07:17:49.077519  283828 main.go:144] libmachine: Parsing certificate...
	I1229 07:17:49.077834  283828 cli_runner.go:164] Run: docker network inspect auto-619064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:17:49.097350  283828 cli_runner.go:211] docker network inspect auto-619064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:17:49.097476  283828 network_create.go:284] running [docker network inspect auto-619064] to gather additional debugging logs...
	I1229 07:17:49.097506  283828 cli_runner.go:164] Run: docker network inspect auto-619064
	W1229 07:17:49.120915  283828 cli_runner.go:211] docker network inspect auto-619064 returned with exit code 1
	I1229 07:17:49.120944  283828 network_create.go:287] error running [docker network inspect auto-619064]: docker network inspect auto-619064: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-619064 not found
	I1229 07:17:49.120955  283828 network_create.go:289] output of [docker network inspect auto-619064]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-619064 not found
	
	** /stderr **
	I1229 07:17:49.121043  283828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:49.143128  283828 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:17:49.144159  283828 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:17:49.145272  283828 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:17:49.146239  283828 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1d020}
	I1229 07:17:49.146272  283828 network_create.go:124] attempt to create docker network auto-619064 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:17:49.146326  283828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-619064 auto-619064
	I1229 07:17:49.199231  283828 network_create.go:108] docker network auto-619064 192.168.76.0/24 created
	I1229 07:17:49.199265  283828 kic.go:121] calculated static IP "192.168.76.2" for the "auto-619064" container
	I1229 07:17:49.199337  283828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:17:49.218101  283828 cli_runner.go:164] Run: docker volume create auto-619064 --label name.minikube.sigs.k8s.io=auto-619064 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:17:49.238420  283828 oci.go:103] Successfully created a docker volume auto-619064
	I1229 07:17:49.238527  283828 cli_runner.go:164] Run: docker run --rm --name auto-619064-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-619064 --entrypoint /usr/bin/test -v auto-619064:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:17:49.701792  283828 oci.go:107] Successfully prepared a docker volume auto-619064
	I1229 07:17:49.701870  283828 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:49.701887  283828 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:17:49.701993  283828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-619064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:17:53.323834  283828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-619064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.621782611s)
	I1229 07:17:53.323929  283828 kic.go:203] duration metric: took 3.62203779s to extract preloaded images to volume ...
	W1229 07:17:53.324036  283828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:17:53.324114  283828 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:17:53.324174  283828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:17:53.426505  283828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-619064 --name auto-619064 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-619064 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-619064 --network auto-619064 --ip 192.168.76.2 --volume auto-619064:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:17:53.792386  283828 cli_runner.go:164] Run: docker container inspect auto-619064 --format={{.State.Running}}
	I1229 07:17:53.819493  283828 cli_runner.go:164] Run: docker container inspect auto-619064 --format={{.State.Status}}
	I1229 07:17:52.650003  281965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:17:52.652147  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:52.660937  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:17:52.660964  281965 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:17:52.661033  281965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:52.671497  281965 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:17:52.671606  281965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:17:52.684391  281965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:52.790418  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:52.791605  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:17:52.791627  281965 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	W1229 07:17:52.799172  281965 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1229 07:17:52.799246  281965 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1229 07:17:52.809064  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:17:52.809093  281965 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:17:52.826327  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:17:52.826352  281965 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:17:52.842612  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:17:52.842633  281965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:17:52.859420  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:17:52.859446  281965 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1229 07:17:52.869239  281965 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1229 07:17:52.873983  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:17:52.874007  281965 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:17:52.886859  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:17:52.886885  281965 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:17:52.901911  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:17:52.901943  281965 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:17:52.914698  281965 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:17:52.914718  281965 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:17:52.927628  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1229 07:17:52.985650  281965 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1229 07:17:53.147834  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:53.172309  281965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:17:53.172309  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1229 07:17:53.216897  281965 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1229 07:17:53.263836  281965 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1229 07:17:53.334467  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:17:53.571333  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:17:53.607695  281965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:17:53.672506  281965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
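Editor's note: the storageclass, storage-provisioner, and dashboard applies above fail while the apiserver on localhost:8443 is still coming up, and the log shows minikube retrying the same kubectl apply after a short delay. Below is a minimal Go sketch of that retry-on-failure pattern around a kubectl apply; the manifest path, attempt count, and fixed 300ms delay are illustrative assumptions, not the test's actual logic.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Placeholder manifest; the log applies files under /etc/kubernetes/addons/ on the node.
	manifest := "storageclass.yaml"
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			fmt.Printf("applied on attempt %d\n", attempt)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		// The log above retries after 300ms; a fixed delay is used here for simplicity.
		time.Sleep(300 * time.Millisecond)
	}
}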
	
	
	==> CRI-O <==
	Dec 29 07:17:21 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:21.321970664Z" level=info msg="Started container" PID=1793 containerID=33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper id=340f7cc2-93b3-4017-b322-7fc5200a180c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f345cab99434c686863a84487c68846422559230477f3ba3cdcbacaa339384a
	Dec 29 07:17:21 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:21.3554131Z" level=info msg="Removing container: 3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57" id=cc12e7ce-93f6-4f21-b668-05ac36b15eb5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:21 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:21.365352866Z" level=info msg="Removed container 3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=cc12e7ce-93f6-4f21-b668-05ac36b15eb5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.385821127Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7868b1d8-3b08-4a35-a938-a1db3463348f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.386773506Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f6501361-4378-4cce-ac9c-6acf881a9c7b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.387848394Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=258f5f5b-ead9-4f0d-bdb8-a392f7d27078 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.38799984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392396268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392559008Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3f6e1d2125dd7e72e38d997cd458de1594256f4f2f1cce624f2df49640814f30/merged/etc/passwd: no such file or directory"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392583472Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3f6e1d2125dd7e72e38d997cd458de1594256f4f2f1cce624f2df49640814f30/merged/etc/group: no such file or directory"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.392906371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.420167676Z" level=info msg="Created container 69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df: kube-system/storage-provisioner/storage-provisioner" id=258f5f5b-ead9-4f0d-bdb8-a392f7d27078 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.420845864Z" level=info msg="Starting container: 69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df" id=1a40e3bf-8cb0-4de8-925e-d46fdd0704c1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:34 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:34.423346958Z" level=info msg="Started container" PID=1808 containerID=69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df description=kube-system/storage-provisioner/storage-provisioner id=1a40e3bf-8cb0-4de8-925e-d46fdd0704c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eed119c40b9dd0e5d0e7007b036ea1849575c5fef760ee4fa9df09b24701e85a
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.267517715Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0ef62de6-fe37-4501-a0a7-d9c67a8ba62f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.268505231Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=447be089-ac58-456e-bd31-dc13dabc7163 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.269688894Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=efb9ebd2-0600-432d-980a-eb5518c84570 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.269847775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.275941026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.276496573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.315912484Z" level=info msg="Created container 47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=efb9ebd2-0600-432d-980a-eb5518c84570 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.317071916Z" level=info msg="Starting container: 47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d" id=1413dc38-3ce4-4c7c-a260-1bef2db02553 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.319441319Z" level=info msg="Started container" PID=1848 containerID=47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper id=1413dc38-3ce4-4c7c-a260-1bef2db02553 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f345cab99434c686863a84487c68846422559230477f3ba3cdcbacaa339384a
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.417251799Z" level=info msg="Removing container: 33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a" id=2880610f-c6da-4ae9-a93b-75983a23f30d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:17:44 default-k8s-diff-port-798607 crio[573]: time="2025-12-29T07:17:44.42599672Z" level=info msg="Removed container 33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9/dashboard-metrics-scraper" id=2880610f-c6da-4ae9-a93b-75983a23f30d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	47b17f578b8c3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   0f345cab99434       dashboard-metrics-scraper-867fb5f87b-w65p9             kubernetes-dashboard
	69c2b94428a3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   eed119c40b9dd       storage-provisioner                                    kube-system
	2ed73ae375657       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   d7832d2a1391c       kubernetes-dashboard-b84665fb8-mj5lz                   kubernetes-dashboard
	15e0b29dc7bd9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   606eb3e626ced       busybox                                                default
	a46d6765151b2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   079eebb5c8464       coredns-7d764666f9-jwmww                               kube-system
	afae4df43cd8f       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   20c808da8c2b3       kindnet-m6jd2                                          kube-system
	b9f478121ddba       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           51 seconds ago      Running             kube-proxy                  0                   e6a7968ba35a9       kube-proxy-4mnzc                                       kube-system
	e449e0b1473e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   eed119c40b9dd       storage-provisioner                                    kube-system
	2b72f7f6b29d9       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           54 seconds ago      Running             kube-controller-manager     0                   3ba471fdf50f1       kube-controller-manager-default-k8s-diff-port-798607   kube-system
	b68c52dc0f0ed       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           54 seconds ago      Running             kube-scheduler              0                   67239053ff742       kube-scheduler-default-k8s-diff-port-798607            kube-system
	7adaca7a38cbd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           54 seconds ago      Running             kube-apiserver              0                   84418b38402af       kube-apiserver-default-k8s-diff-port-798607            kube-system
	c791e2da2999f       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           54 seconds ago      Running             etcd                        0                   6696d2256aaef       etcd-default-k8s-diff-port-798607                      kube-system
	
	
	==> coredns [a46d6765151b2df42c57b4fd3ae7acdca7c9fc096b1807fb848aabf31db30901] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48293 - 40621 "HINFO IN 8612991051879489531.2804412481474677624. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030638956s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-798607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-798607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=default-k8s-diff-port-798607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_16_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-798607
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:17:33 +0000   Mon, 29 Dec 2025 07:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-798607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                b24ee258-37aa-4e3b-b0b9-8a7f17d3bb24
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-jwmww                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-798607                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-m6jd2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-798607             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-798607    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-4mnzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-798607             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-w65p9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-mj5lz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node default-k8s-diff-port-798607 event: Registered Node default-k8s-diff-port-798607 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node default-k8s-diff-port-798607 event: Registered Node default-k8s-diff-port-798607 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c791e2da2999f159e921bf68b6eb0ff81a9e870d3867e046bd180bb6857643da] <==
	{"level":"info","ts":"2025-12-29T07:17:00.857087Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:17:00.857256Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:01.847299Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:01.847375Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:01.847448Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:01.847473Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:01.847491Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.848832Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.848866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:01.848885Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.848892Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:01.849659Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-798607 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:17:01.849681Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:01.849676Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:01.849912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:01.849949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:01.851888Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:01.851961Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:01.854505Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:17:01.854860Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:17:07.139785Z","caller":"traceutil/trace.go:172","msg":"trace[456230644] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"110.192207ms","start":"2025-12-29T07:17:07.029569Z","end":"2025-12-29T07:17:07.139761Z","steps":["trace[456230644] 'process raft request'  (duration: 78.067677ms)","trace[456230644] 'compare'  (duration: 31.511522ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-29T07:17:07.139942Z","caller":"traceutil/trace.go:172","msg":"trace[16617097] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"110.277675ms","start":"2025-12-29T07:17:07.029646Z","end":"2025-12-29T07:17:07.139924Z","steps":["trace[16617097] 'process raft request'  (duration: 110.02756ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T07:17:07.140046Z","caller":"traceutil/trace.go:172","msg":"trace[36207201] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"108.34171ms","start":"2025-12-29T07:17:07.031695Z","end":"2025-12-29T07:17:07.140037Z","steps":["trace[36207201] 'process raft request'  (duration: 108.198845ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-29T07:17:07.140421Z","caller":"traceutil/trace.go:172","msg":"trace[2010144027] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"108.714185ms","start":"2025-12-29T07:17:07.031693Z","end":"2025-12-29T07:17:07.140407Z","steps":["trace[2010144027] 'process raft request'  (duration: 108.13916ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-29T07:17:18.494315Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.756714ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722598045696995638 > lease_revoke:<id:06ed9b68f6c4518f>","response":"size:28"}
	
	
	==> kernel <==
	 07:17:55 up  1:00,  0 user,  load average: 4.79, 3.23, 2.22
	Linux default-k8s-diff-port-798607 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afae4df43cd8f643833e16cb1295db765b29ebb67de964afad4a41ff8974936e] <==
	I1229 07:17:03.916501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:17:03.917099       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:17:03.917328       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:17:03.917361       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:17:03.917386       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:17:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:17:04.126334       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:17:04.317973       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:17:04.318028       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:17:04.318527       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:17:04.618821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:17:04.618844       1 metrics.go:72] Registering metrics
	I1229 07:17:04.618933       1 controller.go:711] "Syncing nftables rules"
	I1229 07:17:14.126532       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:14.126629       1 main.go:301] handling current node
	I1229 07:17:24.127335       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:24.127375       1 main.go:301] handling current node
	I1229 07:17:34.126304       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:34.126362       1 main.go:301] handling current node
	I1229 07:17:44.133301       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:44.133337       1 main.go:301] handling current node
	I1229 07:17:54.134306       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:17:54.134383       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7adaca7a38cbd91d087cd7df5275e466d228d6e8dd4c54aa4a305ea9bee1f833] <==
	I1229 07:17:02.982340       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:17:02.982444       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:17:02.982681       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:02.983687       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:02.983768       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:17:02.983896       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:17:02.983991       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:17:02.984013       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:17:02.984020       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:17:02.984027       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:17:02.984552       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:02.988979       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1229 07:17:03.014630       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:17:03.020650       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:17:03.365667       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:17:03.376797       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:17:03.412805       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:17:03.436864       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:17:03.443678       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:17:03.489319       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.227.7"}
	I1229 07:17:03.500956       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.181.177"}
	I1229 07:17:03.887004       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:06.672269       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:17:06.732205       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:06.825329       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2b72f7f6b29d95aee779b60cd81822c9b177c8165e5f4b6f517ffabb7842f102] <==
	I1229 07:17:06.154309       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154285       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-798607"
	I1229 07:17:06.154323       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154340       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154374       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154354       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:17:06.154404       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154287       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154521       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154093       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154627       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154639       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154649       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154378       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154316       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.156367       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.154613       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.158232       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:06.160703       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.161403       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.255259       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.255338       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:06.255357       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:06.258676       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:06.818430       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [b9f478121ddba24483732c5638ef28b71257f5d523b1dae6cfb332585c61c40c] <==
	I1229 07:17:03.709033       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:17:03.797592       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:03.897947       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:03.897989       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:17:03.898099       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:17:03.923887       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:17:03.924004       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:17:03.931248       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:17:03.932004       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:17:03.932047       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:03.938504       1 config.go:200] "Starting service config controller"
	I1229 07:17:03.938534       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:17:03.938577       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:17:03.938584       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:17:03.938601       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:17:03.938607       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:17:03.938671       1 config.go:309] "Starting node config controller"
	I1229 07:17:03.938684       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:17:03.938693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:17:04.039627       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:17:04.039627       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:17:04.039744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b68c52dc0f0ed416a57bc48dc7336f1d94c6becc7da6d8e5dc24d055b6929608] <==
	I1229 07:17:01.065343       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:17:02.924831       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:17:02.924874       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1229 07:17:02.924890       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:17:02.924899       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:17:02.963947       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:17:02.964037       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:02.970823       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:17:02.970868       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:02.971088       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:17:02.971180       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:17:03.071298       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:19.347880     738 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-798607" containerName="etcd"
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:19.752430     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:19.752481     738 scope.go:122] "RemoveContainer" containerID="3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57"
	Dec 29 07:17:19 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:19.752730     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:21.266060     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:21.266102     738 scope.go:122] "RemoveContainer" containerID="3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:21.354064     738 scope.go:122] "RemoveContainer" containerID="3738bfcf3c9d7ccb89fc0c46b42e05404e26c155a7c1ae17db478db2a226cc57"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:21.354372     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:21.354406     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:21 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:21.354597     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:29 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:29.752323     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:29 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:29.752371     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:29 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:29.752681     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:34 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:34.385340     738 scope.go:122] "RemoveContainer" containerID="e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf"
	Dec 29 07:17:35 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:35.304569     738 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwmww" containerName="coredns"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:44.266912     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:44.266966     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:44.415647     738 scope.go:122] "RemoveContainer" containerID="33b48df23a22a9949889b1817868670fe70df6d3be8a35180711f39b34adf45a"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:44.415957     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" containerName="dashboard-metrics-scraper"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: I1229 07:17:44.415990     738 scope.go:122] "RemoveContainer" containerID="47b17f578b8c3e03183857345147e74034e0f22b4c45b853fdf16dc0a02b0d5d"
	Dec 29 07:17:44 default-k8s-diff-port-798607 kubelet[738]: E1229 07:17:44.416197     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-w65p9_kubernetes-dashboard(9669dd74-02ce-410f-b5f8-17d2a418b77c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-w65p9" podUID="9669dd74-02ce-410f-b5f8-17d2a418b77c"
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:17:49 default-k8s-diff-port-798607 systemd[1]: kubelet.service: Consumed 1.702s CPU time.
	
	
	==> kubernetes-dashboard [2ed73ae37565746fb8f6e353e039947e5998ca65d383bea0334243d9ae71661b] <==
	2025/12/29 07:17:13 Using namespace: kubernetes-dashboard
	2025/12/29 07:17:13 Using in-cluster config to connect to apiserver
	2025/12/29 07:17:13 Using secret token for csrf signing
	2025/12/29 07:17:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:17:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:17:13 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:17:13 Generating JWE encryption key
	2025/12/29 07:17:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:17:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:17:13 Initializing JWE encryption key from synchronized object
	2025/12/29 07:17:13 Creating in-cluster Sidecar client
	2025/12/29 07:17:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:13 Serving insecurely on HTTP port: 9090
	2025/12/29 07:17:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:17:13 Starting overwatch
	
	
	==> storage-provisioner [69c2b94428a3aab745e9698658f6eb6d79fe6fbb241aa41ef296c07dec9ba9df] <==
	I1229 07:17:34.439172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:17:34.447961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:17:34.448050       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:17:34.451591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:37.907021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:42.167542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:45.766984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:48.820802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:51.844196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:51.849530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:51.849701       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:17:51.849881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-798607_002dbc3a-7251-4f49-87c2-2d3908ad2a2f!
	I1229 07:17:51.849899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5078cbe3-2c7d-4503-aba9-6d953718bd88", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-798607_002dbc3a-7251-4f49-87c2-2d3908ad2a2f became leader
	W1229 07:17:51.853467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:51.857496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:17:51.950072       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-798607_002dbc3a-7251-4f49-87c2-2d3908ad2a2f!
	W1229 07:17:53.861322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:53.867129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:55.879534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:17:55.890879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e449e0b1473e7e0fe4b34cc28dcd9fb7f66d2914bac76f028799024e8566d2cf] <==
	I1229 07:17:03.651601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:17:33.654624       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607: exit status 2 (380.482424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.69s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (8.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-067566 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-067566 --alsologtostderr -v=1: exit status 80 (2.580316753s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-067566 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:17:57.612822  290102 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:57.613064  290102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:57.613072  290102 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:57.613076  290102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:57.613278  290102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:57.613495  290102 out.go:368] Setting JSON to false
	I1229 07:17:57.613511  290102 mustload.go:66] Loading cluster: newest-cni-067566
	I1229 07:17:57.613815  290102 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:57.614149  290102 cli_runner.go:164] Run: docker container inspect newest-cni-067566 --format={{.State.Status}}
	I1229 07:17:57.633712  290102 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:57.633993  290102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:17:57.702857  290102 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-29 07:17:57.690455087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:17:57.703667  290102 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-067566 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:17:57.712290  290102 out.go:179] * Pausing node newest-cni-067566 ... 
	I1229 07:17:57.714554  290102 host.go:66] Checking if "newest-cni-067566" exists ...
	I1229 07:17:57.714857  290102 ssh_runner.go:195] Run: systemctl --version
	I1229 07:17:57.714906  290102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-067566
	I1229 07:17:57.735013  290102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/newest-cni-067566/id_rsa Username:docker}
	I1229 07:17:57.835408  290102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:57.849028  290102 pause.go:52] kubelet running: true
	I1229 07:17:57.849089  290102 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:57.990276  290102 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:57.990407  290102 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:58.061148  290102 cri.go:96] found id: "821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c"
	I1229 07:17:58.061177  290102 cri.go:96] found id: "5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2"
	I1229 07:17:58.061184  290102 cri.go:96] found id: "4f88bc0ef769e194956a8ea7fb29170dc4260a5e23de1105125bb2807c952c26"
	I1229 07:17:58.061190  290102 cri.go:96] found id: "58f920dd6d3cdcc2d114032801f7d3c13b1b7bb301072d63bbb3bb9e8d89d75f"
	I1229 07:17:58.061196  290102 cri.go:96] found id: "d873ac75817273ec07814c1a2031fa9b1c6fca13a44fd61b7d0991dca5682b1f"
	I1229 07:17:58.061202  290102 cri.go:96] found id: "96f6157381e219a2c12a140e62823a74b529c9ac0bb607ba91663a3e3b2c12ac"
	I1229 07:17:58.061208  290102 cri.go:96] found id: ""
	I1229 07:17:58.061301  290102 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:58.074943  290102 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:17:58Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:17:58.309329  290102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:58.327560  290102 pause.go:52] kubelet running: false
	I1229 07:17:58.327619  290102 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:58.482068  290102 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:58.482172  290102 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:58.562747  290102 cri.go:96] found id: "821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c"
	I1229 07:17:58.562772  290102 cri.go:96] found id: "5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2"
	I1229 07:17:58.562779  290102 cri.go:96] found id: "4f88bc0ef769e194956a8ea7fb29170dc4260a5e23de1105125bb2807c952c26"
	I1229 07:17:58.562784  290102 cri.go:96] found id: "58f920dd6d3cdcc2d114032801f7d3c13b1b7bb301072d63bbb3bb9e8d89d75f"
	I1229 07:17:58.562789  290102 cri.go:96] found id: "d873ac75817273ec07814c1a2031fa9b1c6fca13a44fd61b7d0991dca5682b1f"
	I1229 07:17:58.562794  290102 cri.go:96] found id: "96f6157381e219a2c12a140e62823a74b529c9ac0bb607ba91663a3e3b2c12ac"
	I1229 07:17:58.562799  290102 cri.go:96] found id: ""
	I1229 07:17:58.562843  290102 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:58.784352  290102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:58.800898  290102 pause.go:52] kubelet running: false
	I1229 07:17:58.801061  290102 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:58.964894  290102 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:58.964993  290102 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:17:59.043809  290102 cri.go:96] found id: "821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c"
	I1229 07:17:59.043839  290102 cri.go:96] found id: "5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2"
	I1229 07:17:59.043845  290102 cri.go:96] found id: "4f88bc0ef769e194956a8ea7fb29170dc4260a5e23de1105125bb2807c952c26"
	I1229 07:17:59.043851  290102 cri.go:96] found id: "58f920dd6d3cdcc2d114032801f7d3c13b1b7bb301072d63bbb3bb9e8d89d75f"
	I1229 07:17:59.043856  290102 cri.go:96] found id: "d873ac75817273ec07814c1a2031fa9b1c6fca13a44fd61b7d0991dca5682b1f"
	I1229 07:17:59.043861  290102 cri.go:96] found id: "96f6157381e219a2c12a140e62823a74b529c9ac0bb607ba91663a3e3b2c12ac"
	I1229 07:17:59.043866  290102 cri.go:96] found id: ""
	I1229 07:17:59.043929  290102 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:17:59.832421  290102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:17:59.848813  290102 pause.go:52] kubelet running: false
	I1229 07:17:59.848878  290102 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:17:59.992685  290102 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:17:59.992744  290102 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:18:00.091869  290102 cri.go:96] found id: "821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c"
	I1229 07:18:00.092019  290102 cri.go:96] found id: "5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2"
	I1229 07:18:00.092027  290102 cri.go:96] found id: "4f88bc0ef769e194956a8ea7fb29170dc4260a5e23de1105125bb2807c952c26"
	I1229 07:18:00.092033  290102 cri.go:96] found id: "58f920dd6d3cdcc2d114032801f7d3c13b1b7bb301072d63bbb3bb9e8d89d75f"
	I1229 07:18:00.092037  290102 cri.go:96] found id: "d873ac75817273ec07814c1a2031fa9b1c6fca13a44fd61b7d0991dca5682b1f"
	I1229 07:18:00.092095  290102 cri.go:96] found id: "96f6157381e219a2c12a140e62823a74b529c9ac0bb607ba91663a3e3b2c12ac"
	I1229 07:18:00.092105  290102 cri.go:96] found id: ""
	I1229 07:18:00.092180  290102 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:18:00.108480  290102 out.go:203] 
	W1229 07:18:00.109713  290102 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:18:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:18:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:18:00.109728  290102 out.go:285] * 
	* 
	W1229 07:18:00.111699  290102 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:18:00.116614  290102 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-067566 --alsologtostderr -v=1 failed: exit status 80
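The trace above shows the sequence behind the exit-status-80 failure: the pause path lists containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on the node; after its 200ms retry budget it gives up with GUEST_PAUSE. The following is a minimal, hypothetical reproduction sketch (not minikube code): it re-runs the same two probes via `docker exec` into the kic node instead of minikube's ssh_runner, and the node name, helper names, and three-attempt retry budget are assumptions for illustration only.

// Hypothetical reproduction sketch (assumptions: docker CLI on PATH, the kic node
// container from this run is still up, and `docker exec` into it is permitted).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run executes `sudo <args...>` inside the node container via docker exec.
func run(node string, args ...string) (string, error) {
	cmd := exec.Command("docker", append([]string{"exec", node, "sudo"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	node := "newest-cni-067566" // assumption: the profile/container name seen in this run

	// Same label-filtered listing the log shows via cri.go / crictl.
	if out, err := run(node, "crictl", "--timeout=10s", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system"); err != nil {
		fmt.Println("crictl failed:", err, out)
	} else {
		fmt.Println("kube-system container IDs:\n" + out)
	}

	// The call that fails in the log: runc's state dir /run/runc is absent,
	// so `runc list` exits non-zero and the pause path eventually aborts.
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := run(node, "runc", "list", "-f", "json")
		if err == nil {
			fmt.Println("runc list:", out)
			return
		}
		fmt.Printf("attempt %d: %v\n%s", attempt, err, out)
		time.Sleep(200 * time.Millisecond) // the log shows a 200ms retry delay
	}
}

If `runc list` keeps failing while crictl still reports running containers, that matches the state captured in the stderr above.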
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-067566
helpers_test.go:244: (dbg) docker inspect newest-cni-067566:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b",
	        "Created": "2025-12-29T07:17:19.198674026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:17:44.428555841Z",
	            "FinishedAt": "2025-12-29T07:17:43.480396064Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/hosts",
	        "LogPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b-json.log",
	        "Name": "/newest-cni-067566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-067566:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-067566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b",
	                "LowerDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-067566",
	                "Source": "/var/lib/docker/volumes/newest-cni-067566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-067566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-067566",
	                "name.minikube.sigs.k8s.io": "newest-cni-067566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c30445a33562503c406b02c0e170415c20edd3e8f212814b02ed357b83ad3c78",
	            "SandboxKey": "/var/run/docker/netns/c30445a33562",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-067566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f04963e259f7b5d31f9667ec06f8c8e0c565f69ad587935b8feaa506efff99b2",
	                    "EndpointID": "c2daf91d57e195111338dd3e05b7f4b4c7d5c395229c6b01f8dd629ce4ef104c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4a:98:6c:ac:71:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-067566",
	                        "b76ee009518e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
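For the post-mortem, the harness needs the host port Docker mapped to the node's 22/tcp (the 127.0.0.1:33098 endpoint the earlier ssh client dialed). The inspect output above carries it under NetworkSettings.Ports, and the log extracts it with a Go template passed to `docker container inspect -f`. Below is a minimal sketch of the same lookup done by parsing the inspect JSON instead; it assumes only the docker CLI on PATH and the container name from this run, and the struct/variable names are mine.

// Minimal sketch (assumption: container "newest-cni-067566" exists locally):
// parse `docker inspect` JSON and print the host endpoint bound to 22/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "newest-cni-067566").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("container not found")
	}
	ssh := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(ssh) == 0 {
		log.Fatal("no 22/tcp binding")
	}
	fmt.Printf("ssh endpoint: %s:%s\n", ssh[0].HostIp, ssh[0].HostPort)
}

Either approach yields the same binding shown in the inspect output above (HostIp 127.0.0.1, HostPort 33098).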
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566: exit status 2 (377.679031ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-067566 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-067566 logs -n 25: (2.240155422s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ stop    │ -p newest-cni-067566 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p kubernetes-upgrade-174577                                                                                                                                                                                                                  │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ embed-certs-739827 image list --format=json                                                                                                                                                                                                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p embed-certs-739827 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ image   │ default-k8s-diff-port-798607 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-619064                  │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-798607 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p embed-certs-739827                                                                                                                                                                                                                         │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p default-k8s-diff-port-798607                                                                                                                                                                                                               │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ newest-cni-067566 image list --format=json                                                                                                                                                                                                    │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p newest-cni-067566 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p embed-certs-739827                                                                                                                                                                                                                         │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p kindnet-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-619064               │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-798607                                                                                                                                                                                                               │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p calico-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-619064                │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:59.971321  291397 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:59.971688  291397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:59.971702  291397 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:59.971709  291397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:59.971961  291397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:59.972479  291397 out.go:368] Setting JSON to false
	I1229 07:17:59.973615  291397 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3632,"bootTime":1766989048,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:59.973671  291397 start.go:143] virtualization: kvm guest
	I1229 07:17:59.978795  291397 out.go:179] * [calico-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:59.980733  291397 notify.go:221] Checking for updates...
	I1229 07:17:59.981035  291397 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:59.982265  291397 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:59.983468  291397 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:59.985028  291397 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:59.986320  291397 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:59.988496  291397 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:59.990979  291397 config.go:182] Loaded profile config "auto-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:59.991129  291397 config.go:182] Loaded profile config "kindnet-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:59.991306  291397 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:59.991420  291397 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:18:00.028139  291397 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:18:00.028264  291397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:18:00.100780  291397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-29 07:18:00.089446993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:18:00.100942  291397 docker.go:319] overlay module found
	I1229 07:18:00.103031  291397 out.go:179] * Using the docker driver based on user configuration
	I1229 07:18:00.104380  291397 start.go:309] selected driver: docker
	I1229 07:18:00.104399  291397 start.go:928] validating driver "docker" against <nil>
	I1229 07:18:00.104417  291397 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:18:00.105143  291397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:18:00.171095  291397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-29 07:18:00.160559125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:18:00.171279  291397 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:18:00.171514  291397 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:18:00.173277  291397 out.go:179] * Using Docker driver with root privileges
	I1229 07:18:00.174410  291397 cni.go:84] Creating CNI manager for "calico"
	I1229 07:18:00.174438  291397 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1229 07:18:00.174539  291397 start.go:353] cluster config:
	{Name:calico-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:calico-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:18:00.175857  291397 out.go:179] * Starting "calico-619064" primary control-plane node in "calico-619064" cluster
	I1229 07:18:00.177060  291397 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:18:00.178440  291397 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:18:00.179678  291397 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:18:00.179717  291397 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:18:00.179727  291397 cache.go:65] Caching tarball of preloaded images
	I1229 07:18:00.179776  291397 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:18:00.179826  291397 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:18:00.179841  291397 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:18:00.179965  291397 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/calico-619064/config.json ...
	I1229 07:18:00.179991  291397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/calico-619064/config.json: {Name:mka9f3e7299f017eb9169ed3c8c3f5e20a9f17c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:18:00.205067  291397 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:18:00.205091  291397 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:18:00.205117  291397 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:18:00.205160  291397 start.go:360] acquireMachinesLock for calico-619064: {Name:mka5e706abfde0328f0cbb9e0cef3514a4fc8546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:18:00.205286  291397 start.go:364] duration metric: took 104.689µs to acquireMachinesLock for "calico-619064"
	I1229 07:18:00.205320  291397 start.go:93] Provisioning new machine with config: &{Name:calico-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:calico-619064 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:18:00.205419  291397 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:17:58.999368  283828 out.go:252]   - Generating certificates and keys ...
	I1229 07:17:58.999467  283828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:17:58.999580  283828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:17:59.083846  283828 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:17:59.106390  283828 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:17:59.174782  283828 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:17:59.262046  283828 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:17:59.394584  283828 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:17:59.394804  283828 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-619064 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:17:59.596834  283828 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:17:59.597044  283828 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-619064 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:17:59.705411  283828 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:17:59.770275  283828 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:17:59.891659  283828 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:17:59.891759  283828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:18:00.064806  283828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:18:00.305443  283828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:18:00.411092  283828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:18:00.612628  283828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:18:00.639245  283828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:18:00.648315  283828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:18:00.727844  283828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.7656055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.769917935Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f84907e1-093a-4d9f-a4e5-4edc42ae6f5b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.770600255Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f781bde4-81b6-4ebf-90ff-767d28fd9933 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.771808968Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.773154049Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.773503681Z" level=info msg="Ran pod sandbox 0ec34fa5ada40d3fcc473d03dfb49f48fd70a6e67d53d5641796c5f31e74c457 with infra container: kube-system/kindnet-xsh5z/POD" id=f84907e1-093a-4d9f-a4e5-4edc42ae6f5b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.773979216Z" level=info msg="Ran pod sandbox c2ab32cfcfec6fc4b8c544bbecfe1756daaa3121d19e65a4076bbc7d20ec7fc1 with infra container: kube-system/kube-proxy-bgwp5/POD" id=f781bde4-81b6-4ebf-90ff-767d28fd9933 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.774900821Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=738d036a-325a-448a-a061-067e31981f9a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.775286252Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=1d52727a-47c2-42e7-9625-325fb009faea name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.77593274Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=286156d9-a4e6-4ca5-93ab-1e9a3d5a4de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.776324601Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=8cdef796-20ae-4c63-bba3-d2cc67f76e24 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777125166Z" level=info msg="Creating container: kube-system/kindnet-xsh5z/kindnet-cni" id=36c10c2e-690f-472e-b66f-81ccba810f45 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777210221Z" level=info msg="Creating container: kube-system/kube-proxy-bgwp5/kube-proxy" id=8c88b334-e70f-4479-98ea-7d3e20ab0d1a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777291811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777367348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.783896454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.784501487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.784891315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.785542976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.813708406Z" level=info msg="Created container 821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c: kube-system/kindnet-xsh5z/kindnet-cni" id=36c10c2e-690f-472e-b66f-81ccba810f45 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.814534467Z" level=info msg="Starting container: 821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c" id=0b5c8486-673f-4834-ab85-ab4eab91dd22 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.817202058Z" level=info msg="Started container" PID=1093 containerID=821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c description=kube-system/kindnet-xsh5z/kindnet-cni id=0b5c8486-673f-4834-ab85-ab4eab91dd22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ec34fa5ada40d3fcc473d03dfb49f48fd70a6e67d53d5641796c5f31e74c457
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.817473968Z" level=info msg="Created container 5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2: kube-system/kube-proxy-bgwp5/kube-proxy" id=8c88b334-e70f-4479-98ea-7d3e20ab0d1a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.818035928Z" level=info msg="Starting container: 5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2" id=e40fb895-e928-4f9c-a216-0f3057554d90 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.821704297Z" level=info msg="Started container" PID=1094 containerID=5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2 description=kube-system/kube-proxy-bgwp5/kube-proxy id=e40fb895-e928-4f9c-a216-0f3057554d90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2ab32cfcfec6fc4b8c544bbecfe1756daaa3121d19e65a4076bbc7d20ec7fc1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	821d626824aed       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   0ec34fa5ada40       kindnet-xsh5z                               kube-system
	5daa627431c76       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   4 seconds ago       Running             kube-proxy                1                   c2ab32cfcfec6       kube-proxy-bgwp5                            kube-system
	4f88bc0ef769e       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   8 seconds ago       Running             kube-controller-manager   1                   c590b732cdd1e       kube-controller-manager-newest-cni-067566   kube-system
	58f920dd6d3cd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   8 seconds ago       Running             kube-apiserver            1                   db91d8084f3bc       kube-apiserver-newest-cni-067566            kube-system
	d873ac7581727       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   1b732d3d43f6c       etcd-newest-cni-067566                      kube-system
	96f6157381e21       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   8 seconds ago       Running             kube-scheduler            1                   6077be9e8957f       kube-scheduler-newest-cni-067566            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-067566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-067566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=newest-cni-067566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_17_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:17:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-067566
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-067566
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                914357fb-65ed-487f-9aef-7a75495f3546
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-067566                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-xsh5z                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-067566             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-067566    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-bgwp5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-067566             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  25s   node-controller  Node newest-cni-067566 event: Registered Node newest-cni-067566 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-067566 event: Registered Node newest-cni-067566 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d873ac75817273ec07814c1a2031fa9b1c6fca13a44fd61b7d0991dca5682b1f] <==
	{"level":"info","ts":"2025-12-29T07:17:53.399671Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:17:53.399898Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:17:53.400048Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:17:53.400392Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:17:53.400157Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:53.400895Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:17:53.401179Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:17:53.676746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:53.676804Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:53.676863Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:53.676878Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:53.676897Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.677539Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.677621Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:53.677680Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.677713Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.680429Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:newest-cni-067566 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:17:53.680599Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:53.680646Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:53.682434Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:53.682607Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:53.685657Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:53.685724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:53.688594Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-29T07:17:53.689340Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:18:02 up  1:00,  0 user,  load average: 4.89, 3.28, 2.25
	Linux newest-cni-067566 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c] <==
	I1229 07:17:57.047816       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:17:57.048299       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1229 07:17:57.048503       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:17:57.048535       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:17:57.048563       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:17:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:17:57.345838       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:17:57.345877       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:17:57.345892       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:17:57.346051       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:17:57.746059       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:17:57.746092       1 metrics.go:72] Registering metrics
	I1229 07:17:57.746158       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [58f920dd6d3cdcc2d114032801f7d3c13b1b7bb301072d63bbb3bb9e8d89d75f] <==
	I1229 07:17:55.103324       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:17:55.104367       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:17:55.104734       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:17:55.105260       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:17:55.106053       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:17:55.113467       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:17:55.113481       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:17:55.113492       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:17:55.113499       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:17:55.115559       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:17:55.116179       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:17:55.131854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:17:55.138965       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:17:55.475414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:17:55.507748       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:17:55.528746       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:17:55.536514       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:17:55.547482       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:17:55.583449       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.253.148"}
	I1229 07:17:55.597873       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.210.207"}
	I1229 07:17:56.006294       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:58.583051       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:17:58.683433       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:58.734709       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4f88bc0ef769e194956a8ea7fb29170dc4260a5e23de1105125bb2807c952c26] <==
	I1229 07:17:58.253409       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253436       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253059       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253613       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253293       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253650       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:58.253656       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:58.253705       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253381       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253972       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254052       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254093       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254119       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254183       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254231       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256288       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256652       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254063       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256674       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256948       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:17:58.258732       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-067566"
	I1229 07:17:58.258797       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:17:58.263068       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.344737       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2] <==
	I1229 07:17:56.887455       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:17:56.955301       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:57.056529       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:57.056571       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1229 07:17:57.056675       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:17:57.081346       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:17:57.081427       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:17:57.088528       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:17:57.088905       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:17:57.088921       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:57.090557       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:17:57.090600       1 config.go:309] "Starting node config controller"
	I1229 07:17:57.090604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:17:57.090610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:17:57.090652       1 config.go:200] "Starting service config controller"
	I1229 07:17:57.090662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:17:57.090779       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:17:57.090798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:17:57.191238       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:17:57.191348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:17:57.191380       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:17:57.191402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [96f6157381e219a2c12a140e62823a74b529c9ac0bb607ba91663a3e3b2c12ac] <==
	I1229 07:17:53.606718       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:17:55.041970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:17:55.042041       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:17:55.042057       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:17:55.042086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:17:55.069168       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:17:55.069301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:55.075385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:17:55.075516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:17:55.075538       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:55.075559       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:17:55.175928       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.866184     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.879689     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-067566\" already exists" pod="kube-system/etcd-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.879736     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.881114     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-067566\" already exists" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.881764     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-067566" containerName="kube-apiserver"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.893201     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-067566\" already exists" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.893262     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.911775     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-067566\" already exists" pod="kube-system/kube-controller-manager-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.911835     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.920019     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-067566\" already exists" pod="kube-system/kube-scheduler-newest-cni-067566"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.454804     664 apiserver.go:52] "Watching apiserver"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.460528     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-067566" containerName="kube-controller-manager"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.460946     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.543197     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-067566" containerName="etcd"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.543550     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-067566" containerName="kube-apiserver"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.559567     664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.617012     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650835     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-lib-modules\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650912     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-cni-cfg\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650937     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-xtables-lock\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650996     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08835fd-da4b-4946-8106-ef878654d316-lib-modules\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.651057     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08835fd-da4b-4946-8106-ef878654d316-xtables-lock\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:57 newest-cni-067566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:57 newest-cni-067566 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:57 newest-cni-067566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-067566 -n newest-cni-067566
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-067566 -n newest-cni-067566: exit status 2 (384.042792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-067566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k: exit status 1 (68.792392ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-8z8sl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-8ttlf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kgf6k" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-067566
helpers_test.go:244: (dbg) docker inspect newest-cni-067566:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b",
	        "Created": "2025-12-29T07:17:19.198674026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:17:44.428555841Z",
	            "FinishedAt": "2025-12-29T07:17:43.480396064Z"
	        },
	        "Image": "sha256:a2c772d8c9235c5e8a66a3d0af7f5184436aa141d74f90a5659fe0d594ea14d5",
	        "ResolvConfPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/hosts",
	        "LogPath": "/var/lib/docker/containers/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b/b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b-json.log",
	        "Name": "/newest-cni-067566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-067566:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-067566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b76ee009518ec14b0c932931cbe7212bb296f3a3ddab5a4a534790020b0ed16b",
	                "LowerDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af-init/diff:/var/lib/docker/overlay2/3c9e424a3298c821d4195922375c499065668db3230de879006c717dc268a220/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fda93af0dbbbb86c0eaf303db055c7aa4292d50ef2979641234b302fe67b93af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-067566",
	                "Source": "/var/lib/docker/volumes/newest-cni-067566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-067566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-067566",
	                "name.minikube.sigs.k8s.io": "newest-cni-067566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c30445a33562503c406b02c0e170415c20edd3e8f212814b02ed357b83ad3c78",
	            "SandboxKey": "/var/run/docker/netns/c30445a33562",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-067566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f04963e259f7b5d31f9667ec06f8c8e0c565f69ad587935b8feaa506efff99b2",
	                    "EndpointID": "c2daf91d57e195111338dd3e05b7f4b4c7d5c395229c6b01f8dd629ce4ef104c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4a:98:6c:ac:71:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-067566",
	                        "b76ee009518e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566: exit status 2 (352.037246ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-067566 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-067566 logs -n 25: (1.351899169s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-122332 image list --format=json                                                                                                                                                                                                    │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p no-preload-122332 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p no-preload-122332                                                                                                                                                                                                                          │ no-preload-122332            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-067566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ stop    │ -p newest-cni-067566 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p kubernetes-upgrade-174577                                                                                                                                                                                                                  │ kubernetes-upgrade-174577    │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ embed-certs-739827 image list --format=json                                                                                                                                                                                                   │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p embed-certs-739827 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ image   │ default-k8s-diff-port-798607 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-619064                  │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-798607 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p embed-certs-739827                                                                                                                                                                                                                         │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ delete  │ -p default-k8s-diff-port-798607                                                                                                                                                                                                               │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ image   │ newest-cni-067566 image list --format=json                                                                                                                                                                                                    │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ pause   │ -p newest-cni-067566 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-067566            │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p embed-certs-739827                                                                                                                                                                                                                         │ embed-certs-739827           │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p kindnet-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-619064               │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-798607                                                                                                                                                                                                               │ default-k8s-diff-port-798607 │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │ 29 Dec 25 07:17 UTC │
	│ start   │ -p calico-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-619064                │ jenkins │ v1.37.0 │ 29 Dec 25 07:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:17:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:17:59.971321  291397 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:59.971688  291397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:59.971702  291397 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:59.971709  291397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:59.971961  291397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:17:59.972479  291397 out.go:368] Setting JSON to false
	I1229 07:17:59.973615  291397 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3632,"bootTime":1766989048,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:17:59.973671  291397 start.go:143] virtualization: kvm guest
	I1229 07:17:59.978795  291397 out.go:179] * [calico-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:17:59.980733  291397 notify.go:221] Checking for updates...
	I1229 07:17:59.981035  291397 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:17:59.982265  291397 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:17:59.983468  291397 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:17:59.985028  291397 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:17:59.986320  291397 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:17:59.988496  291397 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:17:59.990979  291397 config.go:182] Loaded profile config "auto-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:59.991129  291397 config.go:182] Loaded profile config "kindnet-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:59.991306  291397 config.go:182] Loaded profile config "newest-cni-067566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:17:59.991420  291397 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:18:00.028139  291397 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:18:00.028264  291397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:18:00.100780  291397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-29 07:18:00.089446993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:18:00.100942  291397 docker.go:319] overlay module found
	I1229 07:18:00.103031  291397 out.go:179] * Using the docker driver based on user configuration
	I1229 07:18:00.104380  291397 start.go:309] selected driver: docker
	I1229 07:18:00.104399  291397 start.go:928] validating driver "docker" against <nil>
	I1229 07:18:00.104417  291397 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:18:00.105143  291397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:18:00.171095  291397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-29 07:18:00.160559125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:18:00.171279  291397 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:18:00.171514  291397 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:18:00.173277  291397 out.go:179] * Using Docker driver with root privileges
	I1229 07:18:00.174410  291397 cni.go:84] Creating CNI manager for "calico"
	I1229 07:18:00.174438  291397 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1229 07:18:00.174539  291397 start.go:353] cluster config:
	{Name:calico-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:calico-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:18:00.175857  291397 out.go:179] * Starting "calico-619064" primary control-plane node in "calico-619064" cluster
	I1229 07:18:00.177060  291397 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:18:00.178440  291397 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:18:00.179678  291397 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:18:00.179717  291397 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1229 07:18:00.179727  291397 cache.go:65] Caching tarball of preloaded images
	I1229 07:18:00.179776  291397 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:18:00.179826  291397 preload.go:251] Found /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1229 07:18:00.179841  291397 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:18:00.179965  291397 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/calico-619064/config.json ...
	I1229 07:18:00.179991  291397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/calico-619064/config.json: {Name:mka9f3e7299f017eb9169ed3c8c3f5e20a9f17c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:18:00.205067  291397 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:18:00.205091  291397 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:18:00.205117  291397 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:18:00.205160  291397 start.go:360] acquireMachinesLock for calico-619064: {Name:mka5e706abfde0328f0cbb9e0cef3514a4fc8546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:18:00.205286  291397 start.go:364] duration metric: took 104.689µs to acquireMachinesLock for "calico-619064"
	I1229 07:18:00.205320  291397 start.go:93] Provisioning new machine with config: &{Name:calico-619064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:calico-619064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:18:00.205419  291397 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:17:58.999368  283828 out.go:252]   - Generating certificates and keys ...
	I1229 07:17:58.999467  283828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:17:58.999580  283828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:17:59.083846  283828 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:17:59.106390  283828 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:17:59.174782  283828 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:17:59.262046  283828 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:17:59.394584  283828 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:17:59.394804  283828 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-619064 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:17:59.596834  283828 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:17:59.597044  283828 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-619064 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:17:59.705411  283828 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:17:59.770275  283828 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:17:59.891659  283828 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:17:59.891759  283828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:18:00.064806  283828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:18:00.305443  283828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:18:00.411092  283828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:18:00.612628  283828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:18:00.639245  283828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:18:00.648315  283828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:18:00.727844  283828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:17:58.816043  290627 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:17:58.816330  290627 start.go:159] libmachine.API.Create for "kindnet-619064" (driver="docker")
	I1229 07:17:58.816367  290627 client.go:173] LocalClient.Create starting
	I1229 07:17:58.816444  290627 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/ca.pem
	I1229 07:17:58.816482  290627 main.go:144] libmachine: Decoding PEM data...
	I1229 07:17:58.816506  290627 main.go:144] libmachine: Parsing certificate...
	I1229 07:17:58.816574  290627 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-9207/.minikube/certs/cert.pem
	I1229 07:17:58.816602  290627 main.go:144] libmachine: Decoding PEM data...
	I1229 07:17:58.816618  290627 main.go:144] libmachine: Parsing certificate...
	I1229 07:17:58.816998  290627 cli_runner.go:164] Run: docker network inspect kindnet-619064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:17:58.839789  290627 cli_runner.go:211] docker network inspect kindnet-619064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:17:58.840122  290627 network_create.go:284] running [docker network inspect kindnet-619064] to gather additional debugging logs...
	I1229 07:17:58.840186  290627 cli_runner.go:164] Run: docker network inspect kindnet-619064
	W1229 07:17:58.866294  290627 cli_runner.go:211] docker network inspect kindnet-619064 returned with exit code 1
	I1229 07:17:58.866327  290627 network_create.go:287] error running [docker network inspect kindnet-619064]: docker network inspect kindnet-619064: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-619064 not found
	I1229 07:17:58.866343  290627 network_create.go:289] output of [docker network inspect kindnet-619064]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-619064 not found
	
	** /stderr **
	I1229 07:17:58.866456  290627 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:17:58.887080  290627 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
	I1229 07:17:58.888022  290627 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-09c86d5ed1ab IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:3f:ba:d0:a8:f3} reservation:<nil>}
	I1229 07:17:58.888967  290627 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5eb2f52e9e64 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:e7:f2:5b:43:1d} reservation:<nil>}
	I1229 07:17:58.889792  290627 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-03d4317dd96e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:80:fa:85:c9:bd} reservation:<nil>}
	I1229 07:17:58.890459  290627 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a50196d85ec6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:52:30:53:e5:57:03} reservation:<nil>}
	I1229 07:17:58.891150  290627 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f04963e259f7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:92:d5:3a:55:14:72} reservation:<nil>}
	I1229 07:17:58.892174  290627 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea2810}
	I1229 07:17:58.892214  290627 network_create.go:124] attempt to create docker network kindnet-619064 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1229 07:17:58.892300  290627 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-619064 kindnet-619064
	I1229 07:17:58.955368  290627 network_create.go:108] docker network kindnet-619064 192.168.103.0/24 created
	I1229 07:17:58.955405  290627 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-619064" container
	I1229 07:17:58.955472  290627 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:17:58.983179  290627 cli_runner.go:164] Run: docker volume create kindnet-619064 --label name.minikube.sigs.k8s.io=kindnet-619064 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:17:59.401596  290627 oci.go:103] Successfully created a docker volume kindnet-619064
	I1229 07:17:59.401669  290627 cli_runner.go:164] Run: docker run --rm --name kindnet-619064-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-619064 --entrypoint /usr/bin/test -v kindnet-619064:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:17:59.823434  290627 oci.go:107] Successfully prepared a docker volume kindnet-619064
	I1229 07:17:59.823519  290627 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:17:59.823537  290627 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:17:59.823602  290627 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-619064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:18:03.140620  290627 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-619064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.316950176s)
	I1229 07:18:03.140660  290627 kic.go:203] duration metric: took 3.317119025s to extract preloaded images to volume ...
	W1229 07:18:03.140751  290627 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1229 07:18:03.140793  290627 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1229 07:18:03.140851  290627 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:18:03.205799  290627 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-619064 --name kindnet-619064 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-619064 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-619064 --network kindnet-619064 --ip 192.168.103.2 --volume kindnet-619064:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:18:00.769284  283828 out.go:252]   - Booting up control plane ...
	I1229 07:18:00.769411  283828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:18:00.769555  283828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:18:00.769664  283828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:18:00.769820  283828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:18:00.769962  283828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:18:00.789357  283828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:18:00.789694  283828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:18:00.789793  283828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:18:00.893856  283828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:18:00.894033  283828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:18:01.895664  283828 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002024818s
	I1229 07:18:01.898438  283828 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:18:01.898556  283828 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1229 07:18:01.898697  283828 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:18:01.898824  283828 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:18:03.403961  283828 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.50540354s
	
	
	==> CRI-O <==
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.7656055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.769917935Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f84907e1-093a-4d9f-a4e5-4edc42ae6f5b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.770600255Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f781bde4-81b6-4ebf-90ff-767d28fd9933 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.771808968Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.773154049Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.773503681Z" level=info msg="Ran pod sandbox 0ec34fa5ada40d3fcc473d03dfb49f48fd70a6e67d53d5641796c5f31e74c457 with infra container: kube-system/kindnet-xsh5z/POD" id=f84907e1-093a-4d9f-a4e5-4edc42ae6f5b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.773979216Z" level=info msg="Ran pod sandbox c2ab32cfcfec6fc4b8c544bbecfe1756daaa3121d19e65a4076bbc7d20ec7fc1 with infra container: kube-system/kube-proxy-bgwp5/POD" id=f781bde4-81b6-4ebf-90ff-767d28fd9933 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.774900821Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=738d036a-325a-448a-a061-067e31981f9a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.775286252Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=1d52727a-47c2-42e7-9625-325fb009faea name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.77593274Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=286156d9-a4e6-4ca5-93ab-1e9a3d5a4de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.776324601Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=8cdef796-20ae-4c63-bba3-d2cc67f76e24 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777125166Z" level=info msg="Creating container: kube-system/kindnet-xsh5z/kindnet-cni" id=36c10c2e-690f-472e-b66f-81ccba810f45 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777210221Z" level=info msg="Creating container: kube-system/kube-proxy-bgwp5/kube-proxy" id=8c88b334-e70f-4479-98ea-7d3e20ab0d1a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777291811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.777367348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.783896454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.784501487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.784891315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.785542976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.813708406Z" level=info msg="Created container 821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c: kube-system/kindnet-xsh5z/kindnet-cni" id=36c10c2e-690f-472e-b66f-81ccba810f45 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.814534467Z" level=info msg="Starting container: 821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c" id=0b5c8486-673f-4834-ab85-ab4eab91dd22 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.817202058Z" level=info msg="Started container" PID=1093 containerID=821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c description=kube-system/kindnet-xsh5z/kindnet-cni id=0b5c8486-673f-4834-ab85-ab4eab91dd22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ec34fa5ada40d3fcc473d03dfb49f48fd70a6e67d53d5641796c5f31e74c457
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.817473968Z" level=info msg="Created container 5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2: kube-system/kube-proxy-bgwp5/kube-proxy" id=8c88b334-e70f-4479-98ea-7d3e20ab0d1a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.818035928Z" level=info msg="Starting container: 5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2" id=e40fb895-e928-4f9c-a216-0f3057554d90 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:17:56 newest-cni-067566 crio[519]: time="2025-12-29T07:17:56.821704297Z" level=info msg="Started container" PID=1094 containerID=5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2 description=kube-system/kube-proxy-bgwp5/kube-proxy id=e40fb895-e928-4f9c-a216-0f3057554d90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2ab32cfcfec6fc4b8c544bbecfe1756daaa3121d19e65a4076bbc7d20ec7fc1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	821d626824aed       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   7 seconds ago       Running             kindnet-cni               1                   0ec34fa5ada40       kindnet-xsh5z                               kube-system
	5daa627431c76       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   7 seconds ago       Running             kube-proxy                1                   c2ab32cfcfec6       kube-proxy-bgwp5                            kube-system
	4f88bc0ef769e       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   11 seconds ago      Running             kube-controller-manager   1                   c590b732cdd1e       kube-controller-manager-newest-cni-067566   kube-system
	58f920dd6d3cd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   11 seconds ago      Running             kube-apiserver            1                   db91d8084f3bc       kube-apiserver-newest-cni-067566            kube-system
	d873ac7581727       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      1                   1b732d3d43f6c       etcd-newest-cni-067566                      kube-system
	96f6157381e21       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   11 seconds ago      Running             kube-scheduler            1                   6077be9e8957f       kube-scheduler-newest-cni-067566            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-067566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-067566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=newest-cni-067566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_17_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:17:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-067566
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:17:55 +0000   Mon, 29 Dec 2025 07:17:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-067566
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 44f990608ba801cb32708aeb6951f96d
	  System UUID:                914357fb-65ed-487f-9aef-7a75495f3546
	  Boot ID:                    38b59c2f-2e15-4538-b2f7-46a8b6545a02
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-067566                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-xsh5z                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-067566             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-067566    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-bgwp5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-067566             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-067566 event: Registered Node newest-cni-067566 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-067566 event: Registered Node newest-cni-067566 in Controller
	
	
	==> dmesg <==
	[Dec29 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001714] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.380672] i8042: Warning: Keylock active
	[  +0.013979] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.493300] block sda: the capability attribute has been deprecated.
	[  +0.084603] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024785] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.737934] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d873ac75817273ec07814c1a2031fa9b1c6fca13a44fd61b7d0991dca5682b1f] <==
	{"level":"info","ts":"2025-12-29T07:17:53.399671Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:17:53.399898Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:17:53.400048Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:17:53.400392Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-29T07:17:53.400157Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:17:53.400895Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:17:53.401179Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:17:53.676746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:53.676804Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:53.676863Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-29T07:17:53.676878Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:53.676897Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.677539Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.677621Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:17:53.677680Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.677713Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-29T07:17:53.680429Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:newest-cni-067566 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:17:53.680599Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:53.680646Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:17:53.682434Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:53.682607Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:17:53.685657Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:53.685724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:17:53.688594Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-29T07:17:53.689340Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:18:04 up  1:00,  0 user,  load average: 5.06, 3.34, 2.27
	Linux newest-cni-067566 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [821d626824aedc96a16db2c3ab3ee70b841d420650131b013db05a2ae8f2db6c] <==
	I1229 07:17:57.047816       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:17:57.048299       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1229 07:17:57.048503       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:17:57.048535       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:17:57.048563       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:17:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:17:57.345838       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:17:57.345877       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:17:57.345892       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:17:57.346051       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:17:57.746059       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:17:57.746092       1 metrics.go:72] Registering metrics
	I1229 07:17:57.746158       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [58f920dd6d3cdcc2d114032801f7d3c13b1b7bb301072d63bbb3bb9e8d89d75f] <==
	I1229 07:17:55.103324       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:17:55.104367       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:17:55.104734       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:17:55.105260       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:17:55.106053       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:17:55.113467       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:17:55.113481       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:17:55.113492       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:17:55.113499       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:17:55.115559       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:17:55.116179       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:17:55.131854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:17:55.138965       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:17:55.475414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:17:55.507748       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:17:55.528746       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:17:55.536514       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:17:55.547482       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:17:55.583449       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.253.148"}
	I1229 07:17:55.597873       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.210.207"}
	I1229 07:17:56.006294       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:17:58.583051       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:17:58.683433       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:17:58.734709       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4f88bc0ef769e194956a8ea7fb29170dc4260a5e23de1105125bb2807c952c26] <==
	I1229 07:17:58.253409       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253436       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253059       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253613       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253293       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253650       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:17:58.253656       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:17:58.253705       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253381       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.253972       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254052       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254093       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254119       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254183       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254231       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256288       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256652       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.254063       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256674       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.256948       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:17:58.258732       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-067566"
	I1229 07:17:58.258797       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:17:58.263068       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:58.344737       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5daa627431c76b57f1e6d385202b31e64233812baaa0d52dbe7a0c9d048a5bf2] <==
	I1229 07:17:56.887455       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:17:56.955301       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:57.056529       1 shared_informer.go:377] "Caches are synced"
	I1229 07:17:57.056571       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1229 07:17:57.056675       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:17:57.081346       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:17:57.081427       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:17:57.088528       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:17:57.088905       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:17:57.088921       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:57.090557       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:17:57.090600       1 config.go:309] "Starting node config controller"
	I1229 07:17:57.090604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:17:57.090610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:17:57.090652       1 config.go:200] "Starting service config controller"
	I1229 07:17:57.090662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:17:57.090779       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:17:57.090798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:17:57.191238       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:17:57.191348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:17:57.191380       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:17:57.191402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [96f6157381e219a2c12a140e62823a74b529c9ac0bb607ba91663a3e3b2c12ac] <==
	I1229 07:17:53.606718       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:17:55.041970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:17:55.042041       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:17:55.042057       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:17:55.042086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:17:55.069168       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:17:55.069301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:17:55.075385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:17:55.075516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:17:55.075538       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:17:55.075559       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:17:55.175928       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.866184     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.879689     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-067566\" already exists" pod="kube-system/etcd-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.879736     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.881114     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-067566\" already exists" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.881764     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-067566" containerName="kube-apiserver"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.893201     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-067566\" already exists" pod="kube-system/kube-apiserver-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.893262     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.911775     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-067566\" already exists" pod="kube-system/kube-controller-manager-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: I1229 07:17:55.911835     664 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-067566"
	Dec 29 07:17:55 newest-cni-067566 kubelet[664]: E1229 07:17:55.920019     664 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-067566\" already exists" pod="kube-system/kube-scheduler-newest-cni-067566"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.454804     664 apiserver.go:52] "Watching apiserver"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.460528     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-067566" containerName="kube-controller-manager"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.460946     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.543197     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-067566" containerName="etcd"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.543550     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-067566" containerName="kube-apiserver"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.559567     664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: E1229 07:17:56.617012     664 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-067566" containerName="kube-scheduler"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650835     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-lib-modules\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650912     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-cni-cfg\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650937     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c4415-a221-4dfa-a159-aafc30482453-xtables-lock\") pod \"kindnet-xsh5z\" (UID: \"8b8c4415-a221-4dfa-a159-aafc30482453\") " pod="kube-system/kindnet-xsh5z"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.650996     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08835fd-da4b-4946-8106-ef878654d316-lib-modules\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:56 newest-cni-067566 kubelet[664]: I1229 07:17:56.651057     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08835fd-da4b-4946-8106-ef878654d316-xtables-lock\") pod \"kube-proxy-bgwp5\" (UID: \"a08835fd-da4b-4946-8106-ef878654d316\") " pod="kube-system/kube-proxy-bgwp5"
	Dec 29 07:17:57 newest-cni-067566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:17:57 newest-cni-067566 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:17:57 newest-cni-067566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-067566 -n newest-cni-067566
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-067566 -n newest-cni-067566: exit status 2 (378.808747ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-067566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k: exit status 1 (88.951644ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-8z8sl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-8ttlf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kgf6k" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-067566 describe pod coredns-7d764666f9-8z8sl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8ttlf kubernetes-dashboard-b84665fb8-kgf6k: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (8.04s)
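The post-mortem above comes down to a single kubectl query for pods whose phase is not Running, followed by a describe of whatever that query returns. A minimal Go sketch of the same check (this only mirrors the logged kubectl invocation; it is not the helpers_test.go implementation, and it assumes kubectl is on PATH and the context name is supplied by the caller):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// listNonRunningPods runs the same query the post-mortem uses: all pods
// across namespaces whose status.phase is not Running.
func listNonRunningPods(context string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := listNonRunningPods("newest-cni-067566")
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl query failed:", err)
		os.Exit(1)
	}
	fmt.Println("non-running pods:", pods)
}

As in the run above, pods listed here may already have been garbage-collected by the time describe runs, which is why the follow-up describe can return NotFound without invalidating the check.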

                                                
                                    

Test pass (279/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.06
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 2.41
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.79
22 TestOffline 60.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 92.9
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 8.41
48 TestAddons/StoppedEnableDisable 16.65
49 TestCertOptions 25.21
50 TestCertExpiration 210.27
52 TestForceSystemdFlag 20.49
53 TestForceSystemdEnv 25.85
58 TestErrorSpam/setup 16.23
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 5.07
62 TestErrorSpam/unpause 5.42
63 TestErrorSpam/stop 8.1
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.84
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.1
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.47
75 TestFunctional/serial/CacheCmd/cache/add_local 0.88
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 36.23
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.17
86 TestFunctional/serial/LogsFileCmd 1.18
87 TestFunctional/serial/InvalidService 4.36
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 7.2
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1
97 TestFunctional/parallel/ServiceCmdConnect 7.67
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 20.38
101 TestFunctional/parallel/SSHCmd 0.79
102 TestFunctional/parallel/CpCmd 2.19
103 TestFunctional/parallel/MySQL 21.86
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 2.03
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
117 TestFunctional/parallel/Version/components 0.53
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
121 TestFunctional/parallel/ImageCommands/Setup 0.42
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.09
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.26
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.57
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.09
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
143 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
145 TestFunctional/parallel/ProfileCmd/profile_list 0.41
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
147 TestFunctional/parallel/MountCmd/any-port 6.97
148 TestFunctional/parallel/ServiceCmd/List 0.93
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.95
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
151 TestFunctional/parallel/ServiceCmd/Format 0.62
152 TestFunctional/parallel/ServiceCmd/URL 0.65
153 TestFunctional/parallel/MountCmd/specific-port 1.91
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 110.64
163 TestMultiControlPlane/serial/DeployApp 4.33
164 TestMultiControlPlane/serial/PingHostFromPods 1.01
165 TestMultiControlPlane/serial/AddWorkerNode 23.09
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.14
169 TestMultiControlPlane/serial/StopSecondaryNode 14.17
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.63
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 103.91
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.62
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
176 TestMultiControlPlane/serial/StopCluster 43.11
177 TestMultiControlPlane/serial/RestartCluster 51.94
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
179 TestMultiControlPlane/serial/AddSecondaryNode 35.05
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 35.37
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.97
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 22.94
211 TestKicCustomNetwork/use_default_bridge_network 19.21
212 TestKicExistingNetwork 20.28
213 TestKicCustomSubnet 20.07
214 TestKicStaticIP 22.76
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 42.93
219 TestMountStart/serial/StartWithMountFirst 4.71
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.75
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.67
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.25
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 59.03
231 TestMultiNode/serial/DeployApp2Nodes 3.39
232 TestMultiNode/serial/PingHostFrom2Pods 0.7
233 TestMultiNode/serial/AddNode 25.86
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.7
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.15
239 TestMultiNode/serial/RestartKeepsNodes 79.18
240 TestMultiNode/serial/DeleteNode 5.54
241 TestMultiNode/serial/StopMultiNode 28.63
242 TestMultiNode/serial/RestartMultiNode 45.11
243 TestMultiNode/serial/ValidateNameConflict 22.39
250 TestScheduledStopUnix 95.58
253 TestInsufficientStorage 11.62
254 TestRunningBinaryUpgrade 69.45
256 TestKubernetesUpgrade 342.21
257 TestMissingContainerUpgrade 63.82
258 TestStoppedBinaryUpgrade/Setup 0.61
260 TestPause/serial/Start 59.39
261 TestStoppedBinaryUpgrade/Upgrade 306.87
262 TestPause/serial/SecondStartNoReconfiguration 6.76
270 TestPreload/Start-NoPreload-PullImage 54.17
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
274 TestNoKubernetes/serial/StartWithK8s 19.24
278 TestNoKubernetes/serial/StartWithStopK8s 23.2
283 TestNetworkPlugins/group/false 3.36
287 TestPreload/Restart-With-Preload-Check-User-Image 43.93
288 TestNoKubernetes/serial/Start 4.57
289 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
291 TestNoKubernetes/serial/ProfileList 15.69
292 TestNoKubernetes/serial/Stop 1.25
293 TestNoKubernetes/serial/StartNoArgs 6.43
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
297 TestStartStop/group/old-k8s-version/serial/FirstStart 47
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.23
300 TestStartStop/group/old-k8s-version/serial/Stop 16
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/old-k8s-version/serial/SecondStart 51.42
303 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
305 TestStartStop/group/no-preload/serial/FirstStart 43.52
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
310 TestStartStop/group/no-preload/serial/DeployApp 9.27
312 TestStartStop/group/embed-certs/serial/FirstStart 42.7
314 TestStartStop/group/no-preload/serial/Stop 18.26
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.19
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
318 TestStartStop/group/no-preload/serial/SecondStart 50.56
319 TestStartStop/group/embed-certs/serial/DeployApp 8.27
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.23
322 TestStartStop/group/embed-certs/serial/Stop 18.14
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.26
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 47.62
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 43.97
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
334 TestStartStop/group/newest-cni/serial/FirstStart 24.41
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/Stop 2.48
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
343 TestStartStop/group/newest-cni/serial/SecondStart 13.15
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
347 TestNetworkPlugins/group/auto/Start 39.24
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
353 TestNetworkPlugins/group/kindnet/Start 45.5
354 TestNetworkPlugins/group/calico/Start 56.72
355 TestNetworkPlugins/group/custom-flannel/Start 44.75
356 TestNetworkPlugins/group/auto/KubeletFlags 0.46
357 TestNetworkPlugins/group/auto/NetCatPod 9.3
358 TestNetworkPlugins/group/auto/DNS 0.14
359 TestNetworkPlugins/group/auto/Localhost 0.11
360 TestNetworkPlugins/group/auto/HairPin 0.09
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
363 TestNetworkPlugins/group/kindnet/NetCatPod 9.17
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
366 TestNetworkPlugins/group/calico/ControllerPod 5.04
367 TestNetworkPlugins/group/enable-default-cni/Start 57.14
368 TestNetworkPlugins/group/kindnet/DNS 0.12
369 TestNetworkPlugins/group/kindnet/Localhost 0.09
370 TestNetworkPlugins/group/kindnet/HairPin 0.09
371 TestNetworkPlugins/group/custom-flannel/DNS 0.11
372 TestNetworkPlugins/group/calico/KubeletFlags 0.42
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
375 TestNetworkPlugins/group/calico/NetCatPod 9.68
376 TestNetworkPlugins/group/calico/DNS 0.13
377 TestNetworkPlugins/group/calico/Localhost 0.11
378 TestNetworkPlugins/group/calico/HairPin 0.12
379 TestNetworkPlugins/group/flannel/Start 45.38
380 TestNetworkPlugins/group/bridge/Start 59.18
381 TestPreload/PreloadSrc/gcs 3.88
382 TestPreload/PreloadSrc/github 13.92
383 TestPreload/PreloadSrc/gcs-cached 0.43
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.17
386 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
387 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
388 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
389 TestNetworkPlugins/group/flannel/ControllerPod 6.01
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
391 TestNetworkPlugins/group/flannel/NetCatPod 9.18
392 TestNetworkPlugins/group/flannel/DNS 0.11
393 TestNetworkPlugins/group/flannel/Localhost 0.08
394 TestNetworkPlugins/group/flannel/HairPin 0.09
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
396 TestNetworkPlugins/group/bridge/NetCatPod 8.18
397 TestNetworkPlugins/group/bridge/DNS 0.12
398 TestNetworkPlugins/group/bridge/Localhost 0.09
399 TestNetworkPlugins/group/bridge/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (6.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-887932 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-887932 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.058666817s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1229 06:46:00.088392   12733 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1229 06:46:00.088473   12733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
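The preload-exists subtest only confirms that the tarball referenced above is already present in the local cache after the download-only start. A minimal sketch of that kind of check (the cache path is copied from the log lines above; reducing the check to a plain file-existence test is an assumption about the test's intent):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path taken from the preload.go log line above.
	cache := "/home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball"
	tarball := filepath.Join(cache, "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")

	if info, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("preload found: %s (%d bytes)\n", tarball, info.Size())
	}
}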

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-887932
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-887932: exit status 85 (70.498992ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-887932 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-887932 │ jenkins │ v1.37.0 │ 29 Dec 25 06:45 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:45:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:45:54.079779   12745 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:45:54.079966   12745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:45:54.079974   12745 out.go:374] Setting ErrFile to fd 2...
	I1229 06:45:54.079978   12745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:45:54.080140   12745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	W1229 06:45:54.080287   12745 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22353-9207/.minikube/config/config.json: open /home/jenkins/minikube-integration/22353-9207/.minikube/config/config.json: no such file or directory
	I1229 06:45:54.080710   12745 out.go:368] Setting JSON to true
	I1229 06:45:54.081539   12745 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1706,"bootTime":1766989048,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:45:54.081591   12745 start.go:143] virtualization: kvm guest
	I1229 06:45:54.085431   12745 out.go:99] [download-only-887932] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:45:54.085567   12745 notify.go:221] Checking for updates...
	W1229 06:45:54.085595   12745 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball: no such file or directory
	I1229 06:45:54.086580   12745 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:45:54.088029   12745 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:45:54.089945   12745 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 06:45:54.091086   12745 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 06:45:54.092038   12745 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1229 06:45:54.094308   12745 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:45:54.094512   12745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:45:54.119199   12745 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 06:45:54.119293   12745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:45:54.328463   12745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-29 06:45:54.319514023 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:45:54.328560   12745 docker.go:319] overlay module found
	I1229 06:45:54.329939   12745 out.go:99] Using the docker driver based on user configuration
	I1229 06:45:54.329960   12745 start.go:309] selected driver: docker
	I1229 06:45:54.329966   12745 start.go:928] validating driver "docker" against <nil>
	I1229 06:45:54.330033   12745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:45:54.388276   12745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-29 06:45:54.379277181 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:45:54.388418   12745 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:45:54.388843   12745 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1229 06:45:54.389004   12745 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:45:54.390698   12745 out.go:171] Using Docker driver with root privileges
	I1229 06:45:54.391715   12745 cni.go:84] Creating CNI manager for ""
	I1229 06:45:54.391777   12745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 06:45:54.391789   12745 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 06:45:54.391844   12745 start.go:353] cluster config:
	{Name:download-only-887932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-887932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:45:54.392931   12745 out.go:99] Starting "download-only-887932" primary control-plane node in "download-only-887932" cluster
	I1229 06:45:54.392947   12745 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 06:45:54.394045   12745 out.go:99] Pulling base image v0.0.48-1766979815-22353 ...
	I1229 06:45:54.394071   12745 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 06:45:54.394135   12745 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 06:45:54.410412   12745 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:45:54.410537   12745 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1229 06:45:54.410566   12745 cache.go:65] Caching tarball of preloaded images
	I1229 06:45:54.410588   12745 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory
	I1229 06:45:54.410698   12745 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:45:54.410715   12745 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 06:45:54.412301   12745 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1229 06:45:54.412317   12745 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1229 06:45:54.412322   12745 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1229 06:45:54.433117   12745 preload.go:313] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1229 06:45:54.433284   12745 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1229 06:45:57.297902   12745 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1229 06:45:57.298295   12745 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/download-only-887932/config.json ...
	I1229 06:45:57.298331   12745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/download-only-887932/config.json: {Name:mk83e69039c5f9972d5be17f2e0dd5b18bf36621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:45:57.298539   12745 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 06:45:57.298746   12745 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-887932 host does not exist
	  To start a cluster, run: "minikube start -p download-only-887932"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
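Exit status 85 here is the expected outcome: the profile was created with --download-only, so there is no running host to collect logs from, and the test still passes. A small sketch of asserting that behaviour from Go (binary path and profile name are taken from the log; the exit-code handling uses only the standard library):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-887932")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For a download-only profile the host never exists, so a
		// non-zero exit (85 in the run above) is the expected result.
		fmt.Println("minikube logs exited with code", exitErr.ExitCode())
		return
	}
	fmt.Println("unexpected result:", err)
}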

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-887932
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/json-events (2.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-722440 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-722440 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.410639438s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1229 06:46:02.943722   12733 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1229 06:46:02.943757   12733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-722440
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-722440: exit status 85 (69.491247ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-887932 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-887932 │ jenkins │ v1.37.0 │ 29 Dec 25 06:45 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete  │ -p download-only-887932                                                                                                                                                   │ download-only-887932 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start   │ -o=json --download-only -p download-only-722440 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-722440 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:00.582025   13098 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:00.582551   13098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:00.582572   13098 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:00.582579   13098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:00.583034   13098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:46:00.583834   13098 out.go:368] Setting JSON to true
	I1229 06:46:00.584605   13098 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1713,"bootTime":1766989048,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:46:00.584693   13098 start.go:143] virtualization: kvm guest
	I1229 06:46:00.586301   13098 out.go:99] [download-only-722440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:46:00.586417   13098 notify.go:221] Checking for updates...
	I1229 06:46:00.587536   13098 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:00.588791   13098 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:00.589934   13098 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 06:46:00.591170   13098 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 06:46:00.592371   13098 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1229 06:46:00.594422   13098 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:46:00.594646   13098 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:00.617859   13098 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 06:46:00.617960   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:00.673001   13098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-29 06:46:00.663255816 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:46:00.673091   13098 docker.go:319] overlay module found
	I1229 06:46:00.674506   13098 out.go:99] Using the docker driver based on user configuration
	I1229 06:46:00.674533   13098 start.go:309] selected driver: docker
	I1229 06:46:00.674539   13098 start.go:928] validating driver "docker" against <nil>
	I1229 06:46:00.674638   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:00.727458   13098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-29 06:46:00.718327417 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:46:00.727645   13098 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:00.728088   13098 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1229 06:46:00.728233   13098 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:46:00.729923   13098 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-722440 host does not exist
	  To start a cluster, run: "minikube start -p download-only-722440"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-722440
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-517975 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-517975" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-517975
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
x
+
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I1229 06:46:04.033051   12733 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-518856 --alsologtostderr --binary-mirror http://127.0.0.1:40905 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-518856" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-518856
--- PASS: TestBinaryMirror (0.79s)

                                                
                                    
TestOffline (60.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-469438 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-469438 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (58.46871458s)
helpers_test.go:176: Cleaning up "offline-crio-469438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-469438
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-469438: (2.410294268s)
--- PASS: TestOffline (60.88s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-264018
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-264018: exit status 85 (60.839465ms)

                                                
                                                
-- stdout --
	* Profile "addons-264018" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-264018"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-264018
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-264018: exit status 85 (59.036236ms)

                                                
                                                
-- stdout --
	* Profile "addons-264018" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-264018"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (92.9s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-264018 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-264018 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m32.902183905s)
--- PASS: TestAddons/Setup (92.90s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-264018 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-264018 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-264018 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-264018 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c34183a6-ab5e-44fd-811d-4ecfe518baf1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c34183a6-ab5e-44fd-811d-4ecfe518baf1] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002968674s
addons_test.go:696: (dbg) Run:  kubectl --context addons-264018 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-264018 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-264018 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.65s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-264018
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-264018: (16.361130723s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-264018
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-264018
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-264018
--- PASS: TestAddons/StoppedEnableDisable (16.65s)

                                                
                                    
TestCertOptions (25.21s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-001954 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.289662533s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-001954 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-001954 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-001954 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-001954" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-001954
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-001954: (2.98180162s)
--- PASS: TestCertOptions (25.21s)

                                                
                                    
TestCertExpiration (210.27s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-452455 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-452455 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.548722088s)
E1229 07:12:42.184581   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-452455 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.318918376s)
helpers_test.go:176: Cleaning up "cert-expiration-452455" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-452455
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-452455: (2.397652415s)
--- PASS: TestCertExpiration (210.27s)

                                                
                                    
TestForceSystemdFlag (20.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-074338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-074338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (17.554993832s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-074338 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-074338" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-074338
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-074338: (2.622161292s)
--- PASS: TestForceSystemdFlag (20.49s)

                                                
                                    
TestForceSystemdEnv (25.85s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-879774 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-879774 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.993580708s)
helpers_test.go:176: Cleaning up "force-systemd-env-879774" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-879774
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-879774: (2.852322601s)
--- PASS: TestForceSystemdEnv (25.85s)

                                                
                                    
TestErrorSpam/setup (16.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-089213 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-089213 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-089213 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-089213 --driver=docker  --container-runtime=crio: (16.23346685s)
--- PASS: TestErrorSpam/setup (16.23s)

                                                
                                    
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
TestErrorSpam/status (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
TestErrorSpam/pause (5.07s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause: exit status 80 (1.542770745s)

                                                
                                                
-- stdout --
	* Pausing node nospam-089213 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:49:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause: exit status 80 (1.802757343s)

                                                
                                                
-- stdout --
	* Pausing node nospam-089213 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:49:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause: exit status 80 (1.719359425s)

                                                
                                                
-- stdout --
	* Pausing node nospam-089213 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:49:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.07s)

                                                
                                    
TestErrorSpam/unpause (5.42s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause: exit status 80 (1.675640108s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-089213 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:49:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause: exit status 80 (2.093638183s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-089213 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:49:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause: exit status 80 (1.653903112s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-089213 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T06:49:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.42s)

                                                
                                    
TestErrorSpam/stop (8.1s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 stop: (7.895640637s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089213 --log_dir /tmp/nospam-089213 stop
--- PASS: TestErrorSpam/stop (8.10s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22353-9207/.minikube/files/etc/test/nested/copy/12733/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (38.84s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-120775 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-120775 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.841869719s)
--- PASS: TestFunctional/serial/StartWithProxy (38.84s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.1s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1229 06:50:23.629755   12733 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-120775 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-120775 --alsologtostderr -v=8: (6.096458351s)
functional_test.go:678: soft start took 6.097169301s for "functional-120775" cluster.
I1229 06:50:29.726578   12733 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-120775 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-120775 /tmp/TestFunctionalserialCacheCmdcacheadd_local4080127914/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cache add minikube-local-cache-test:functional-120775
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cache delete minikube-local-cache-test:functional-120775
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-120775
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.88s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.746381ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 kubectl -- --context functional-120775 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-120775 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.23s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-120775 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-120775 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.228724917s)
functional_test.go:776: restart took 36.228842115s for "functional-120775" cluster.
I1229 06:51:11.727747   12733 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (36.23s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-120775 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 logs: (1.169625101s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 logs --file /tmp/TestFunctionalserialLogsFileCmd844946816/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 logs --file /tmp/TestFunctionalserialLogsFileCmd844946816/001/logs.txt: (1.174905152s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

                                                
                                    
TestFunctional/serial/InvalidService (4.36s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-120775 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-120775
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-120775: exit status 115 (344.524015ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32191 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-120775 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 config get cpus: exit status 14 (60.132617ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 config get cpus: exit status 14 (79.36822ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-120775 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-120775 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 49128: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.20s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-120775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-120775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.360275ms)

                                                
                                                
-- stdout --
	* [functional-120775] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:51:42.009435   48467 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:51:42.009736   48467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:51:42.009745   48467 out.go:374] Setting ErrFile to fd 2...
	I1229 06:51:42.009752   48467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:51:42.010078   48467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:51:42.010647   48467 out.go:368] Setting JSON to false
	I1229 06:51:42.011928   48467 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2054,"bootTime":1766989048,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:51:42.012008   48467 start.go:143] virtualization: kvm guest
	I1229 06:51:42.013907   48467 out.go:179] * [functional-120775] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 06:51:42.015079   48467 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:51:42.015154   48467 notify.go:221] Checking for updates...
	I1229 06:51:42.019372   48467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:51:42.020768   48467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 06:51:42.021858   48467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 06:51:42.022903   48467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:51:42.024035   48467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:51:42.025808   48467 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:51:42.026786   48467 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:51:42.053946   48467 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 06:51:42.054128   48467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:51:42.118120   48467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-29 06:51:42.105589586 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:51:42.118290   48467 docker.go:319] overlay module found
	I1229 06:51:42.120855   48467 out.go:179] * Using the docker driver based on existing profile
	I1229 06:51:42.124643   48467 start.go:309] selected driver: docker
	I1229 06:51:42.124658   48467 start.go:928] validating driver "docker" against &{Name:functional-120775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-120775 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:51:42.124749   48467 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:51:42.127471   48467 out.go:203] 
	W1229 06:51:42.132409   48467 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1229 06:51:42.134065   48467 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-120775 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
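The dry-run path above only validates the requested configuration against the existing profile, so the 1800MB memory floor can be probed without touching the cluster. A minimal sketch, assuming the same flags the test uses; the sample values are arbitrary and only the minimum reported above is meaningful:

# Probe the memory validation floor; --dry-run never creates or alters the cluster.
for mem in 250MB 1790MB 1800MB 4096MB; do
  out/minikube-linux-amd64 start -p functional-120775 --dry-run --memory "$mem" \
    --driver=docker --container-runtime=crio >/dev/null 2>&1
  echo "--memory $mem -> exit $?"
done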

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-120775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-120775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.11397ms)

                                                
                                                
-- stdout --
	* [functional-120775] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 06:51:41.835420   48312 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:51:41.835527   48312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:51:41.835539   48312 out.go:374] Setting ErrFile to fd 2...
	I1229 06:51:41.835543   48312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:51:41.835853   48312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:51:41.836318   48312 out.go:368] Setting JSON to false
	I1229 06:51:41.837387   48312 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2054,"bootTime":1766989048,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 06:51:41.837452   48312 start.go:143] virtualization: kvm guest
	I1229 06:51:41.839975   48312 out.go:179] * [functional-120775] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1229 06:51:41.841117   48312 notify.go:221] Checking for updates...
	I1229 06:51:41.841157   48312 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:51:41.842384   48312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:51:41.843482   48312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 06:51:41.844533   48312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 06:51:41.845596   48312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 06:51:41.849551   48312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:51:41.851392   48312 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:51:41.852098   48312 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:51:41.877726   48312 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 06:51:41.877827   48312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:51:41.940842   48312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-29 06:51:41.930108679 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:51:41.940937   48312 docker.go:319] overlay module found
	I1229 06:51:41.943837   48312 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1229 06:51:41.945130   48312 start.go:309] selected driver: docker
	I1229 06:51:41.945151   48312 start.go:928] validating driver "docker" against &{Name:functional-120775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-120775 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:51:41.945303   48312 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:51:41.946988   48312 out.go:203] 
	W1229 06:51:41.948069   48312 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1229 06:51:41.949279   48312 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
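The French output above comes from minikube's bundled translations; the harness evidently runs the same dry-run under a French locale. A minimal sketch, assuming language selection follows the standard locale environment variables (which the output here is consistent with):

# Re-run the dry-run under a French locale; messages such as
# "Utilisation du pilote docker basé sur le profil existant" are expected.
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-120775 \
  --dry-run --memory 250MB --driver=docker --container-runtime=crio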

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-120775 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-120775 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-958m7" [b4eac91a-3a86-4d76-9a8e-983b3a8216aa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-958m7" [b4eac91a-3a86-4d76-9a8e-983b3a8216aa] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.002959629s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31781
functional_test.go:1685: http://192.168.49.2:31781: success! body:
Request served by hello-node-connect-5d95464fd4-958m7

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31781
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.67s)
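The endpoint returned by "service ... --url" is a plain NodePort URL, so the same round trip can be repeated by hand with curl. A minimal sketch; the URL differs per run, which is why it is resolved first:

# Resolve the NodePort URL for the service, then fetch it directly;
# the echo-server replies with the request it received.
URL=$(out/minikube-linux-amd64 -p functional-120775 service hello-node-connect --url)
curl -s "$URL"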

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (20.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e2a43d4f-00b6-44cb-b080-5a8957f1c174] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004895402s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-120775 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-120775 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-120775 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-120775 apply -f testdata/storage-provisioner/pod.yaml
I1229 06:51:27.852133   12733 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [27776587-b9e0-4883-8b7d-caf5392c6c27] Pending
helpers_test.go:353: "sp-pod" [27776587-b9e0-4883-8b7d-caf5392c6c27] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00301544s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-120775 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-120775 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-120775 apply -f testdata/storage-provisioner/pod.yaml
I1229 06:51:35.685059   12733 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [3e42e4ef-7312-4184-8ec1-43e4b588f674] Pending
helpers_test.go:353: "sp-pod" [3e42e4ef-7312-4184-8ec1-43e4b588f674] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003786207s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-120775 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.38s)
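The claim itself comes from testdata/storage-provisioner/pvc.yaml, which is not reproduced in this report. For illustration only, a claim of the same shape can be applied inline against the default storage class (the name, size, and access mode below are assumptions, not the test's manifest):

# Create a small claim against the default storage class, then confirm it binds.
kubectl --context functional-120775 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-120775 get pvc myclaim-demo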

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh -n functional-120775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cp functional-120775:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd273498890/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh -n functional-120775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh -n functional-120775 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-120775 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-dfk8b" [3ed71129-5935-404d-bd8c-fc82968c85b8] Pending
helpers_test.go:353: "mysql-7d7b65bc95-dfk8b" [3ed71129-5935-404d-bd8c-fc82968c85b8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-dfk8b" [3ed71129-5935-404d-bd8c-fc82968c85b8] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003333401s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;": exit status 1 (96.393107ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1229 06:51:34.240180   12733 retry.go:84] will retry after 1.1s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;": exit status 1 (85.937542ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;": exit status 1 (87.608388ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.86s)
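The "Access denied" and "Can't connect ... mysqld.sock" errors above are expected while mysqld inside the pod is still initializing; the harness simply retries the query until it succeeds. A rough equivalent of that retry loop (pod name and password taken from this run; the retry count and delay are arbitrary):

# Retry the probe query until the MySQL server inside the pod accepts it.
for i in $(seq 1 20); do
  if kubectl --context functional-120775 exec mysql-7d7b65bc95-dfk8b -- \
       mysql -ppassword -e "show databases;"; then
    break
  fi
  sleep 2
done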

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/12733/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /etc/test/nested/copy/12733/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
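FileSync exercises minikube's file sync mechanism: files placed under the files/ directory of the minikube home (MINIKUBE_HOME/files, by default ~/.minikube/files) are copied into the node at the same path on start, which is where /etc/test/nested/copy/12733/hosts comes from (12733 being this run's PID-derived name). A minimal sketch assuming that layout; the demo path below is invented for illustration:

# Files under $MINIKUBE_HOME/files/ are mirrored into the node at the same path
# on the next start; verify afterwards over ssh.
mkdir -p "$MINIKUBE_HOME/files/etc/demo"
echo "synced content" > "$MINIKUBE_HOME/files/etc/demo/hello"
out/minikube-linux-amd64 -p functional-120775 start
out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /etc/demo/hello"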

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/12733.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /etc/ssl/certs/12733.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/12733.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /usr/share/ca-certificates/12733.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/127332.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /etc/ssl/certs/127332.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/127332.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /usr/share/ca-certificates/127332.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.03s)
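Each certificate is checked under two names: the PEM synced to /etc/ssl/certs/<pid>.pem and /usr/share/ca-certificates/, plus a hash-named entry (51391683.0, 3ec20f2e.0) that corresponds to the OpenSSL subject hash, which is what makes the cert discoverable by tools scanning /etc/ssl/certs. A minimal sketch to confirm the pairing, assuming openssl is available inside the node image:

# The .0 filename should equal the OpenSSL subject hash of the synced PEM.
out/minikube-linux-amd64 -p functional-120775 ssh \
  "openssl x509 -noout -hash -in /etc/ssl/certs/12733.pem"
# expected output: 51391683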

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-120775 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
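The go-template above prints only the label keys of the first node; the same information, keys and values, can also be read with a jsonpath query. Shown only as an alternative phrasing, not the form the test uses:

# Print all labels (keys and values) of the first node.
kubectl --context functional-120775 get nodes -o jsonpath='{.items[0].metadata.labels}'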

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh "sudo systemctl is-active docker": exit status 1 (308.51347ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh "sudo systemctl is-active containerd": exit status 1 (305.264417ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
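The "Process exited with status 3" lines are systemctl's convention rather than a test failure: "systemctl is-active" exits 0 only for an active unit and non-zero (typically 3) otherwise, so docker and containerd reporting "inactive" is exactly what a crio-only node should show. A minimal counterpart check for the active runtime:

# crio is the configured runtime, so its unit should be the one reporting active.
out/minikube-linux-amd64 -p functional-120775 ssh "sudo systemctl is-active crio"
echo "is-active exit status: $?"   # 0 expected for an active unit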

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-120775 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-120775
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-120775 image ls --format short --alsologtostderr:
I1229 06:51:44.181475   49883 out.go:360] Setting OutFile to fd 1 ...
I1229 06:51:44.181779   49883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:44.181790   49883 out.go:374] Setting ErrFile to fd 2...
I1229 06:51:44.181796   49883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:44.182059   49883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
I1229 06:51:44.182688   49883 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:44.182803   49883 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:44.183291   49883 cli_runner.go:164] Run: docker container inspect functional-120775 --format={{.State.Status}}
I1229 06:51:44.205730   49883 ssh_runner.go:195] Run: systemctl --version
I1229 06:51:44.205785   49883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-120775
I1229 06:51:44.225355   49883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/functional-120775/id_rsa Username:docker}
I1229 06:51:44.325981   49883 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
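As the stderr above shows, "image ls" is served by "crictl images --output json" inside the node and reformats that result. The same listing can be requested in the other supported formats, or taken straight from the runtime; a brief sketch using only commands that already appear in this report:

# Other output formats of the same listing (exercised by the tests below) ...
out/minikube-linux-amd64 -p functional-120775 image ls --format table
out/minikube-linux-amd64 -p functional-120775 image ls --format json
# ... or the raw runtime view it is built from:
out/minikube-linux-amd64 -p functional-120775 ssh "sudo crictl images --output json"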

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-120775 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 5e3dcc4ab5604 │ 804MB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-120775                     │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test               │ functional-120775                     │ dfbfdf3f00f4a │ 3.33kB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-120775 image ls --format table --alsologtostderr:
I1229 06:51:45.475761   50582 out.go:360] Setting OutFile to fd 1 ...
I1229 06:51:45.476036   50582 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:45.476047   50582 out.go:374] Setting ErrFile to fd 2...
I1229 06:51:45.476054   50582 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:45.476380   50582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
I1229 06:51:45.477117   50582 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:45.477286   50582 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:45.477904   50582 cli_runner.go:164] Run: docker container inspect functional-120775 --format={{.State.Status}}
I1229 06:51:45.500683   50582 ssh_runner.go:195] Run: systemctl --version
I1229 06:51:45.500741   50582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-120775
I1229 06:51:45.526362   50582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/functional-120775/id_rsa Username:docker}
I1229 06:51:45.633969   50582 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-120775 image ls --format json --alsologtostderr:
[{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io
/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c
94f47804d21","public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/cored
ns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9d
bcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
"docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4943877"},{"id":"dfbfdf3f00f4a7abdf586bc664cef4301b8327cae2c18593834d13c9f63ac812","repoDigests":["localhost/minikube-local-cache-test@sha256:10ef032a32c95b303c32cbbe645d5c6b4b0888f073129de94315f1322ce0691a"],"repoTags":["localhos
t/minikube-local-cache-test:functional-120775"],"size":"3330"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-120775 image ls --format json --alsologtostderr:
I1229 06:51:45.197332   50404 out.go:360] Setting OutFile to fd 1 ...
I1229 06:51:45.197586   50404 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:45.197595   50404 out.go:374] Setting ErrFile to fd 2...
I1229 06:51:45.197602   50404 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:45.197949   50404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
I1229 06:51:45.198861   50404 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:45.199087   50404 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:45.199729   50404 cli_runner.go:164] Run: docker container inspect functional-120775 --format={{.State.Status}}
I1229 06:51:45.226187   50404 ssh_runner.go:195] Run: systemctl --version
I1229 06:51:45.226263   50404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-120775
I1229 06:51:45.250391   50404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/functional-120775/id_rsa Username:docker}
I1229 06:51:45.360126   50404 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-120775 image ls --format yaml --alsologtostderr:
- id: dfbfdf3f00f4a7abdf586bc664cef4301b8327cae2c18593834d13c9f63ac812
repoDigests:
- localhost/minikube-local-cache-test@sha256:10ef032a32c95b303c32cbbe645d5c6b4b0888f073129de94315f1322ce0691a
repoTags:
- localhost/minikube-local-cache-test:functional-120775
size: "3330"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4943877"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-120775 image ls --format yaml --alsologtostderr:
I1229 06:51:44.435082   49974 out.go:360] Setting OutFile to fd 1 ...
I1229 06:51:44.435240   49974 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:44.435252   49974 out.go:374] Setting ErrFile to fd 2...
I1229 06:51:44.435260   49974 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:44.435561   49974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
I1229 06:51:44.436330   49974 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:44.436472   49974 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:44.437097   49974 cli_runner.go:164] Run: docker container inspect functional-120775 --format={{.State.Status}}
I1229 06:51:44.461721   49974 ssh_runner.go:195] Run: systemctl --version
I1229 06:51:44.461778   49974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-120775
I1229 06:51:44.487513   49974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/functional-120775/id_rsa Username:docker}
I1229 06:51:44.599130   49974 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh pgrep buildkitd: exit status 1 (349.272369ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image build -t localhost/my-image:functional-120775 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 image build -t localhost/my-image:functional-120775 testdata/build --alsologtostderr: (3.350867587s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-120775 image build -t localhost/my-image:functional-120775 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 88ded6afaa3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-120775
--> 0cf9b4159e4
Successfully tagged localhost/my-image:functional-120775
0cf9b4159e4094e4824ee5f70c67165de0c2d9df0612819139fa6d8e29ddd734
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-120775 image build -t localhost/my-image:functional-120775 testdata/build --alsologtostderr:
I1229 06:51:45.075172   50352 out.go:360] Setting OutFile to fd 1 ...
I1229 06:51:45.075338   50352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:45.075351   50352 out.go:374] Setting ErrFile to fd 2...
I1229 06:51:45.075357   50352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:51:45.075709   50352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
I1229 06:51:45.076398   50352 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:45.077327   50352 config.go:182] Loaded profile config "functional-120775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 06:51:45.077970   50352 cli_runner.go:164] Run: docker container inspect functional-120775 --format={{.State.Status}}
I1229 06:51:45.102151   50352 ssh_runner.go:195] Run: systemctl --version
I1229 06:51:45.102215   50352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-120775
I1229 06:51:45.131441   50352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/functional-120775/id_rsa Username:docker}
I1229 06:51:45.243742   50352 build_images.go:162] Building image from path: /tmp/build.3286021174.tar
I1229 06:51:45.243828   50352 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1229 06:51:45.255211   50352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3286021174.tar
I1229 06:51:45.260198   50352 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3286021174.tar: stat -c "%s %y" /var/lib/minikube/build/build.3286021174.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3286021174.tar': No such file or directory
I1229 06:51:45.260265   50352 ssh_runner.go:362] scp /tmp/build.3286021174.tar --> /var/lib/minikube/build/build.3286021174.tar (3072 bytes)
I1229 06:51:45.283659   50352 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3286021174
I1229 06:51:45.293815   50352 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3286021174 -xf /var/lib/minikube/build/build.3286021174.tar
I1229 06:51:45.304864   50352 crio.go:315] Building image: /var/lib/minikube/build/build.3286021174
I1229 06:51:45.304968   50352 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-120775 /var/lib/minikube/build/build.3286021174 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1229 06:51:48.331906   50352 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-120775 /var/lib/minikube/build/build.3286021174 --cgroup-manager=cgroupfs: (3.026900673s)
I1229 06:51:48.331997   50352 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3286021174
I1229 06:51:48.340798   50352 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3286021174.tar
I1229 06:51:48.348411   50352 build_images.go:218] Built localhost/my-image:functional-120775 from /tmp/build.3286021174.tar
I1229 06:51:48.348449   50352 build_images.go:134] succeeded building to: functional-120775
I1229 06:51:48.348455   50352 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
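
The three STEP lines above imply a minimal Dockerfile under testdata/build. The snippet below is a hedged reconstruction of that build, not the literal test data: the content.txt payload and the scratch directory are assumptions, while the base image, the RUN/ADD steps, and the minikube image build invocation are taken from the log.

# Sketch only: replay the build exercised above; testdata/build contents are assumed.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
printf 'placeholder payload\n' > content.txt        # real payload unknown
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-120775 image build \
  -t localhost/my-image:functional-120775 /tmp/build-sketch --alsologtostderr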

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.42s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr: (1.180986615s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr: (2.835044177s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.09s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-120775 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-120775 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-120775 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 44602: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-120775 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-120775 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-120775 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6be23702-cea7-4d6b-b433-8f80a9358c34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6be23702-cea7-4d6b-b433-8f80a9358c34] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003803573s
I1229 06:51:34.088626   12733 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)
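
This subtest waits for a pod labelled run=nginx-svc, and the IngressIP/AccessDirect subtests below read the LoadBalancer address of a Service named nginx-svc. A manifest in that shape is sketched here as a hedged stand-in for testdata/testsvc.yaml; the image, port, and exact structure are assumptions drawn from the pod conditions and the jsonpath query in the log.

# Sketch only: a pod/service pair shaped like the one this subtest waits on.
kubectl --context functional-120775 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
    targetPort: 80
EOF
# The tunnel started earlier assigns the ingress IP that WaitService/IngressIP reads:
kubectl --context functional-120775 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}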

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr: (3.189112096s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-120775 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.093656261s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-120775 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.214.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-120775 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-120775 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-120775 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-zkbs8" [7ef448d3-2759-4646-87fd-9df840892a19] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-zkbs8" [7ef448d3-2759-4646-87fd-9df840892a19] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004009717s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)
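
DeployApp creates the echo-server deployment and exposes it as a NodePort; the HTTPS/Format/URL subtests later resolve the node URL (http://192.168.49.2:31762 in this run). A condensed, hedged replay of that flow follows; the final curl is illustrative only and is not something the test performs.

# Sketch only: deploy, expose, and resolve the NodePort URL as the subtests do.
kubectl --context functional-120775 create deployment hello-node \
  --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
kubectl --context functional-120775 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-120775 service hello-node --url
# Illustrative check of the resolved endpoint (not part of the test):
curl -s "$(out/minikube-linux-amd64 -p functional-120775 service hello-node --url)"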

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "353.220945ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "59.392242ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "339.386676ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "57.678796ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (6.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdany-port2642394145/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766991098887984431" to /tmp/TestFunctionalparallelMountCmdany-port2642394145/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766991098887984431" to /tmp/TestFunctionalparallelMountCmdany-port2642394145/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766991098887984431" to /tmp/TestFunctionalparallelMountCmdany-port2642394145/001/test-1766991098887984431
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.550732ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 29 06:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 29 06:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 29 06:51 test-1766991098887984431
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh cat /mount-9p/test-1766991098887984431
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-120775 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ca25d7fb-96e5-4205-9062-2b0bcbf342fe] Pending
helpers_test.go:353: "busybox-mount" [ca25d7fb-96e5-4205-9062-2b0bcbf342fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ca25d7fb-96e5-4205-9062-2b0bcbf342fe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ca25d7fb-96e5-4205-9062-2b0bcbf342fe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008714776s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-120775 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdany-port2642394145/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.97s)
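
The any-port subtest drives a host directory into the node over 9p and verifies it with findmnt and ls, as shown above. A compressed, hedged replay of the host-side flow; the temporary directory and payload are arbitrary stand-ins for the test's own files.

# Sketch only: mount a host directory into the node and verify the 9p mount.
MNT_SRC=$(mktemp -d)                                   # arbitrary host directory
echo "test-$(date +%s%N)" > "$MNT_SRC/created-by-test" # analogous to the files written above
out/minikube-linux-amd64 mount -p functional-120775 "$MNT_SRC:/mount-9p" --alsologtostderr -v=1 &
MOUNT_PID=$!
# Verify the mount is visible inside the node, as the test does:
out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-120775 ssh -- ls -la /mount-9p
# Stop the background mount process when done:
kill "$MOUNT_PID"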

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.93s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.93s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.95s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 service list -o json
functional_test.go:1509: Took "951.754459ms" to run "out/minikube-linux-amd64 -p functional-120775 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.95s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31762
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/ServiceCmd/Format (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.62s)

TestFunctional/parallel/ServiceCmd/URL (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31762
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.65s)

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdspecific-port1066677422/001:/mount-9p --alsologtostderr -v=1 --port 35285]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.362394ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1229 06:51:46.189567   12733 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdspecific-port1066677422/001:/mount-9p --alsologtostderr -v=1 --port 35285] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh "sudo umount -f /mount-9p": exit status 1 (282.489252ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-120775 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdspecific-port1066677422/001:/mount-9p --alsologtostderr -v=1 --port 35285] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2733379918/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2733379918/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2733379918/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T" /mount1: exit status 1 (340.231797ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T" /mount1
2025/12/29 06:51:48 [DEBUG] GET http://127.0.0.1:40595/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-120775 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-120775 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2733379918/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2733379918/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-120775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2733379918/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-120775
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-120775
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-120775
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (110.64s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1229 06:52:38.451294   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:38.456616   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:38.466882   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:38.487211   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:38.527519   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:38.607781   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:38.768181   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:39.088704   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:39.729690   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:41.010165   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:43.570893   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:48.691862   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:52:58.933049   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:19.413319   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m49.870820829s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (110.64s)

TestMultiControlPlane/serial/DeployApp (4.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 kubectl -- rollout status deployment/busybox: (2.522214453s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-qnvw2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-vf62l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-qnvw2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-vf62l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-qnvw2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-vf62l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.33s)

TestMultiControlPlane/serial/PingHostFromPods (1.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-qnvw2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-qnvw2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-vf62l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-vf62l -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
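
PingHostFromPods resolves host.minikube.internal from inside each busybox pod (the awk 'NR==5' | cut pipeline presumably extracts the resolved address from busybox nslookup output) and pings the result, 192.168.49.1 in this run. A hedged replay against one pod from this run:

# Sketch only: resolve the host gateway from inside a pod and ping it once.
HOST_IP=$(out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
out/minikube-linux-amd64 -p ha-459765 kubectl -- exec busybox-769dd8b7dd-4pggd -- \
  sh -c "ping -c 1 $HOST_IP"   # resolved to 192.168.49.1 in this run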

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node add --alsologtostderr -v 5
E1229 06:54:00.373766   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 node add --alsologtostderr -v 5: (22.188281414s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.09s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-459765 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

TestMultiControlPlane/serial/CopyFile (17.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp testdata/cp-test.txt ha-459765:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile887577020/001/cp-test_ha-459765.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765:/home/docker/cp-test.txt ha-459765-m02:/home/docker/cp-test_ha-459765_ha-459765-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test_ha-459765_ha-459765-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765:/home/docker/cp-test.txt ha-459765-m03:/home/docker/cp-test_ha-459765_ha-459765-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test_ha-459765_ha-459765-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765:/home/docker/cp-test.txt ha-459765-m04:/home/docker/cp-test_ha-459765_ha-459765-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test_ha-459765_ha-459765-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp testdata/cp-test.txt ha-459765-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile887577020/001/cp-test_ha-459765-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m02:/home/docker/cp-test.txt ha-459765:/home/docker/cp-test_ha-459765-m02_ha-459765.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test_ha-459765-m02_ha-459765.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m02:/home/docker/cp-test.txt ha-459765-m03:/home/docker/cp-test_ha-459765-m02_ha-459765-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test_ha-459765-m02_ha-459765-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m02:/home/docker/cp-test.txt ha-459765-m04:/home/docker/cp-test_ha-459765-m02_ha-459765-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test_ha-459765-m02_ha-459765-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp testdata/cp-test.txt ha-459765-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile887577020/001/cp-test_ha-459765-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m03:/home/docker/cp-test.txt ha-459765:/home/docker/cp-test_ha-459765-m03_ha-459765.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test_ha-459765-m03_ha-459765.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m03:/home/docker/cp-test.txt ha-459765-m02:/home/docker/cp-test_ha-459765-m03_ha-459765-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test_ha-459765-m03_ha-459765-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m03:/home/docker/cp-test.txt ha-459765-m04:/home/docker/cp-test_ha-459765-m03_ha-459765-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test_ha-459765-m03_ha-459765-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp testdata/cp-test.txt ha-459765-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile887577020/001/cp-test_ha-459765-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m04:/home/docker/cp-test.txt ha-459765:/home/docker/cp-test_ha-459765-m04_ha-459765.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test_ha-459765-m04_ha-459765.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m04:/home/docker/cp-test.txt ha-459765-m02:/home/docker/cp-test_ha-459765-m04_ha-459765-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m02 "sudo cat /home/docker/cp-test_ha-459765-m04_ha-459765-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 cp ha-459765-m04:/home/docker/cp-test.txt ha-459765-m03:/home/docker/cp-test_ha-459765-m04_ha-459765-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m03 "sudo cat /home/docker/cp-test_ha-459765-m04_ha-459765-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.14s)
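
CopyFile repeats one pattern across every node pair: cp a file onto a node, then cat it back over ssh, both in place and after copying it to a second node. A hedged distillation of that pattern, using node names from the run above:

# Sketch only: copy from the host into the primary control plane, then verify over ssh.
out/minikube-linux-amd64 -p ha-459765 cp testdata/cp-test.txt ha-459765:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765 "sudo cat /home/docker/cp-test.txt"
# Copy node-to-node (control plane to worker) and verify on the destination:
out/minikube-linux-amd64 -p ha-459765 cp ha-459765:/home/docker/cp-test.txt \
  ha-459765-m04:/home/docker/cp-test_ha-459765_ha-459765-m04.txt
out/minikube-linux-amd64 -p ha-459765 ssh -n ha-459765-m04 \
  "sudo cat /home/docker/cp-test_ha-459765_ha-459765-m04.txt"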

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (14.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 node stop m02 --alsologtostderr -v 5: (13.475097754s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5: exit status 7 (691.699327ms)

-- stdout --
	ha-459765
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-459765-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-459765-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-459765-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1229 06:54:43.819871   72104 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:54:43.820318   72104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:54:43.820327   72104 out.go:374] Setting ErrFile to fd 2...
	I1229 06:54:43.820332   72104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:54:43.820534   72104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:54:43.820701   72104 out.go:368] Setting JSON to false
	I1229 06:54:43.820723   72104 mustload.go:66] Loading cluster: ha-459765
	I1229 06:54:43.820796   72104 notify.go:221] Checking for updates...
	I1229 06:54:43.821085   72104 config.go:182] Loaded profile config "ha-459765": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:54:43.821104   72104 status.go:174] checking status of ha-459765 ...
	I1229 06:54:43.821552   72104 cli_runner.go:164] Run: docker container inspect ha-459765 --format={{.State.Status}}
	I1229 06:54:43.840167   72104 status.go:371] ha-459765 host status = "Running" (err=<nil>)
	I1229 06:54:43.840192   72104 host.go:66] Checking if "ha-459765" exists ...
	I1229 06:54:43.840535   72104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-459765
	I1229 06:54:43.859491   72104 host.go:66] Checking if "ha-459765" exists ...
	I1229 06:54:43.859706   72104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:54:43.859740   72104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-459765
	I1229 06:54:43.877086   72104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/ha-459765/id_rsa Username:docker}
	I1229 06:54:43.970425   72104 ssh_runner.go:195] Run: systemctl --version
	I1229 06:54:43.976273   72104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:54:43.987543   72104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:54:44.040270   72104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-29 06:54:44.031015005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 06:54:44.040809   72104 kubeconfig.go:125] found "ha-459765" server: "https://192.168.49.254:8443"
	I1229 06:54:44.040837   72104 api_server.go:166] Checking apiserver status ...
	I1229 06:54:44.040871   72104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:44.052116   72104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	I1229 06:54:44.060160   72104 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1238/cgroup
	I1229 06:54:44.067675   72104 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-decba5fb061da6d77e56bbcc4dbe8c17174952d146ad449ae9d7dae226eb1e56.scope/container/cgroup.freeze
	I1229 06:54:44.074631   72104 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 06:54:44.079924   72104 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 06:54:44.079944   72104 status.go:463] ha-459765 apiserver status = Running (err=<nil>)
	I1229 06:54:44.079956   72104 status.go:176] ha-459765 status: &{Name:ha-459765 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:54:44.079977   72104 status.go:174] checking status of ha-459765-m02 ...
	I1229 06:54:44.080207   72104 cli_runner.go:164] Run: docker container inspect ha-459765-m02 --format={{.State.Status}}
	I1229 06:54:44.097711   72104 status.go:371] ha-459765-m02 host status = "Stopped" (err=<nil>)
	I1229 06:54:44.097729   72104 status.go:384] host is not running, skipping remaining checks
	I1229 06:54:44.097734   72104 status.go:176] ha-459765-m02 status: &{Name:ha-459765-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:54:44.097751   72104 status.go:174] checking status of ha-459765-m03 ...
	I1229 06:54:44.098008   72104 cli_runner.go:164] Run: docker container inspect ha-459765-m03 --format={{.State.Status}}
	I1229 06:54:44.115007   72104 status.go:371] ha-459765-m03 host status = "Running" (err=<nil>)
	I1229 06:54:44.115044   72104 host.go:66] Checking if "ha-459765-m03" exists ...
	I1229 06:54:44.115389   72104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-459765-m03
	I1229 06:54:44.133602   72104 host.go:66] Checking if "ha-459765-m03" exists ...
	I1229 06:54:44.133827   72104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:54:44.133857   72104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-459765-m03
	I1229 06:54:44.150964   72104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/ha-459765-m03/id_rsa Username:docker}
	I1229 06:54:44.245377   72104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:54:44.257498   72104 kubeconfig.go:125] found "ha-459765" server: "https://192.168.49.254:8443"
	I1229 06:54:44.257524   72104 api_server.go:166] Checking apiserver status ...
	I1229 06:54:44.257560   72104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 06:54:44.268060   72104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	I1229 06:54:44.276398   72104 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1170/cgroup
	I1229 06:54:44.283567   72104 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-08a38b1d67e4f49c9b86eedeb2da18b03d871ff5041aa152022c40845f20834c.scope/container/cgroup.freeze
	I1229 06:54:44.290722   72104 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 06:54:44.294791   72104 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 06:54:44.294810   72104 status.go:463] ha-459765-m03 apiserver status = Running (err=<nil>)
	I1229 06:54:44.294817   72104 status.go:176] ha-459765-m03 status: &{Name:ha-459765-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:54:44.294830   72104 status.go:174] checking status of ha-459765-m04 ...
	I1229 06:54:44.295096   72104 cli_runner.go:164] Run: docker container inspect ha-459765-m04 --format={{.State.Status}}
	I1229 06:54:44.313381   72104 status.go:371] ha-459765-m04 host status = "Running" (err=<nil>)
	I1229 06:54:44.313400   72104 host.go:66] Checking if "ha-459765-m04" exists ...
	I1229 06:54:44.313663   72104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-459765-m04
	I1229 06:54:44.330830   72104 host.go:66] Checking if "ha-459765-m04" exists ...
	I1229 06:54:44.331147   72104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 06:54:44.331193   72104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-459765-m04
	I1229 06:54:44.349302   72104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/ha-459765-m04/id_rsa Username:docker}
	I1229 06:54:44.443132   72104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 06:54:44.454788   72104 status.go:176] ha-459765-m04 status: &{Name:ha-459765-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.17s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.63s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 node start m02 --alsologtostderr -v 5: (7.69743315s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.63s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 stop --alsologtostderr -v 5
E1229 06:55:22.296412   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 stop --alsologtostderr -v 5: (46.37442466s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 start --wait true --alsologtostderr -v 5
E1229 06:56:19.140166   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.145476   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.155793   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.176048   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.216359   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.296681   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.457123   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:19.777462   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:20.418447   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:21.699275   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:24.259750   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:56:29.380560   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 start --wait true --alsologtostderr -v 5: (57.414807424s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.91s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.62s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node delete m03 --alsologtostderr -v 5
E1229 06:56:39.621776   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 node delete m03 --alsologtostderr -v 5: (10.737060946s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (43.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 stop --alsologtostderr -v 5
E1229 06:57:00.102095   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 stop --alsologtostderr -v 5: (42.998386049s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5: exit status 7 (115.390165ms)

-- stdout --
	ha-459765
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-459765-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-459765-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1229 06:57:34.020780   86443 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:57:34.021077   86443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:57:34.021088   86443 out.go:374] Setting ErrFile to fd 2...
	I1229 06:57:34.021091   86443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:57:34.021330   86443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 06:57:34.021509   86443 out.go:368] Setting JSON to false
	I1229 06:57:34.021532   86443 mustload.go:66] Loading cluster: ha-459765
	I1229 06:57:34.021659   86443 notify.go:221] Checking for updates...
	I1229 06:57:34.021891   86443 config.go:182] Loaded profile config "ha-459765": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 06:57:34.021908   86443 status.go:174] checking status of ha-459765 ...
	I1229 06:57:34.022401   86443 cli_runner.go:164] Run: docker container inspect ha-459765 --format={{.State.Status}}
	I1229 06:57:34.042234   86443 status.go:371] ha-459765 host status = "Stopped" (err=<nil>)
	I1229 06:57:34.042257   86443 status.go:384] host is not running, skipping remaining checks
	I1229 06:57:34.042264   86443 status.go:176] ha-459765 status: &{Name:ha-459765 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:57:34.042303   86443 status.go:174] checking status of ha-459765-m02 ...
	I1229 06:57:34.042632   86443 cli_runner.go:164] Run: docker container inspect ha-459765-m02 --format={{.State.Status}}
	I1229 06:57:34.061985   86443 status.go:371] ha-459765-m02 host status = "Stopped" (err=<nil>)
	I1229 06:57:34.062004   86443 status.go:384] host is not running, skipping remaining checks
	I1229 06:57:34.062010   86443 status.go:176] ha-459765-m02 status: &{Name:ha-459765-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 06:57:34.062025   86443 status.go:174] checking status of ha-459765-m04 ...
	I1229 06:57:34.062279   86443 cli_runner.go:164] Run: docker container inspect ha-459765-m04 --format={{.State.Status}}
	I1229 06:57:34.079599   86443 status.go:371] ha-459765-m04 host status = "Stopped" (err=<nil>)
	I1229 06:57:34.079619   86443 status.go:384] host is not running, skipping remaining checks
	I1229 06:57:34.079635   86443 status.go:176] ha-459765-m04 status: &{Name:ha-459765-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.11s)

TestMultiControlPlane/serial/RestartCluster (51.94s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1229 06:57:38.450306   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:57:41.062675   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:58:06.136829   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (51.148731669s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.94s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (35.05s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-459765 node add --control-plane --alsologtostderr -v 5: (34.121242547s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-459765 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (35.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-100345 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-100345 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (35.36768043s)
--- PASS: TestJSONOutput/start/Command (35.37s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-100345 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-100345 --output=json --user=testUser: (7.971861154s)
--- PASS: TestJSONOutput/stop/Command (7.97s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-439108 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-439108 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.524172ms)

-- stdout --
	{"specversion":"1.0","id":"0b35b1d9-808b-4983-b5fb-3666cae1ebb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-439108] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb944123-8a25-4c2d-b670-989af1426e70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"44ae6801-c142-4bd7-84f8-db15306fb602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fb606f2e-b769-4628-bc92-b968729e8d98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig"}}
	{"specversion":"1.0","id":"81c9bc62-31a1-45f1-a567-fac53ef70081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube"}}
	{"specversion":"1.0","id":"96ce7d09-b2c0-4d61-8d56-74bda7be5f0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"269d859d-f49c-4904-b44f-cf7e965d26fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c2136769-5333-46f7-b415-5c01bb315109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-439108" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-439108
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (22.94s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-123053 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-123053 --network=: (20.833417596s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-123053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-123053
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-123053: (2.084145736s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.94s)

TestKicCustomNetwork/use_default_bridge_network (19.21s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-341722 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-341722 --network=bridge: (17.245990864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-341722" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-341722
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-341722: (1.945531429s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (19.21s)

TestKicExistingNetwork (20.28s)

=== RUN   TestKicExistingNetwork
I1229 07:00:44.963870   12733 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:00:44.981239   12733 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:00:44.981315   12733 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1229 07:00:44.981336   12733 cli_runner.go:164] Run: docker network inspect existing-network
W1229 07:00:44.998918   12733 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1229 07:00:44.998952   12733 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1229 07:00:44.998974   12733 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1229 07:00:44.999106   12733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:00:45.016803   12733 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cdc02b57a9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:92:f5:d8:8c:53} reservation:<nil>}
I1229 07:00:45.017137   12733 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001be8de0}
I1229 07:00:45.017162   12733 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1229 07:00:45.017234   12733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1229 07:00:45.063737   12733 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-761205 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-761205 --network=existing-network: (18.180341599s)
helpers_test.go:176: Cleaning up "existing-network-761205" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-761205
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-761205: (1.961741465s)
I1229 07:01:05.222856   12733 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (20.28s)

TestKicCustomSubnet (20.07s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-439811 --subnet=192.168.60.0/24
E1229 07:01:19.140455   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-439811 --subnet=192.168.60.0/24: (17.956160778s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-439811 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-439811" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-439811
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-439811: (2.098412006s)
--- PASS: TestKicCustomSubnet (20.07s)

TestKicStaticIP (22.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-929497 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-929497 --static-ip=192.168.200.200: (20.536003877s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-929497 ip
helpers_test.go:176: Cleaning up "static-ip-929497" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-929497
E1229 07:01:46.824367   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-929497: (2.081078459s)
--- PASS: TestKicStaticIP (22.76s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (42.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-270717 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-270717 --driver=docker  --container-runtime=crio: (19.728613942s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-272763 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-272763 --driver=docker  --container-runtime=crio: (17.231167348s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-270717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-272763
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-272763" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-272763
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-272763: (2.342597508s)
helpers_test.go:176: Cleaning up "first-270717" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-270717
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-270717: (2.342914866s)
--- PASS: TestMinikubeProfile (42.93s)

TestMountStart/serial/StartWithMountFirst (4.71s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-399737 --memory=3072 --mount-string /tmp/TestMountStartserial1936880673/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-399737 --memory=3072 --mount-string /tmp/TestMountStartserial1936880673/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.708802825s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.71s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-399737 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.75s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-414649 --memory=3072 --mount-string /tmp/TestMountStartserial1936880673/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1229 07:02:38.451652   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-414649 --memory=3072 --mount-string /tmp/TestMountStartserial1936880673/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.745199351s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.75s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-414649 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-399737 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-399737 --alsologtostderr -v=5: (1.666298863s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-414649 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-414649
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-414649: (1.244751875s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.25s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-414649
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-414649: (6.250944112s)
--- PASS: TestMountStart/serial/RestartStopped (7.25s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-414649 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (59.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167190 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167190 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (58.554329981s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (59.03s)

TestMultiNode/serial/DeployApp2Nodes (3.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-167190 -- rollout status deployment/busybox: (2.077526503s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-xt48x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-xt48x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-xt48x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-xt48x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-xt48x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
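What the ping check does: resolve host.minikube.internal inside a pod and ping the address it maps to (192.168.67.1 on this run's cluster network). Equivalent commands, assuming one of the busybox pods deployed above (pod names vary per run):

	# resolve the host alias from inside the pod, then ping the resolved gateway address
	out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- nslookup host.minikube.internal
	out/minikube-linux-amd64 kubectl -p multinode-167190 -- exec busybox-769dd8b7dd-rwq67 -- ping -c 1 192.168.67.1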

                                                
                                    
TestMultiNode/serial/AddNode (25.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-167190 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-167190 -v=5 --alsologtostderr: (25.209835767s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-167190 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp testdata/cp-test.txt multinode-167190:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1982535021/001/cp-test_multinode-167190.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190:/home/docker/cp-test.txt multinode-167190-m02:/home/docker/cp-test_multinode-167190_multinode-167190-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test_multinode-167190_multinode-167190-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190:/home/docker/cp-test.txt multinode-167190-m03:/home/docker/cp-test_multinode-167190_multinode-167190-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m03 "sudo cat /home/docker/cp-test_multinode-167190_multinode-167190-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp testdata/cp-test.txt multinode-167190-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1982535021/001/cp-test_multinode-167190-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190-m02:/home/docker/cp-test.txt multinode-167190:/home/docker/cp-test_multinode-167190-m02_multinode-167190.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test_multinode-167190-m02_multinode-167190.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190-m02:/home/docker/cp-test.txt multinode-167190-m03:/home/docker/cp-test_multinode-167190-m02_multinode-167190-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m03 "sudo cat /home/docker/cp-test_multinode-167190-m02_multinode-167190-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp testdata/cp-test.txt multinode-167190-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1982535021/001/cp-test_multinode-167190-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190-m03:/home/docker/cp-test.txt multinode-167190:/home/docker/cp-test_multinode-167190-m03_multinode-167190.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test_multinode-167190-m03_multinode-167190.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190-m03:/home/docker/cp-test.txt multinode-167190-m02:/home/docker/cp-test_multinode-167190-m03_multinode-167190-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test_multinode-167190-m03_multinode-167190-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.70s)
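The copy/verify loop above reduces to two commands per direction: minikube cp to move the file and minikube ssh -n <node> to read it back. A minimal sketch with the same profile and paths as the test:

	# push a local file onto the primary node, then read it back to verify the contents
	out/minikube-linux-amd64 -p multinode-167190 cp testdata/cp-test.txt multinode-167190:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190 "sudo cat /home/docker/cp-test.txt"
	# copy node-to-node: source node path on the left, destination node path on the right
	out/minikube-linux-amd64 -p multinode-167190 cp multinode-167190:/home/docker/cp-test.txt \
	    multinode-167190-m02:/home/docker/cp-test_multinode-167190_multinode-167190-m02.txt
	out/minikube-linux-amd64 -p multinode-167190 ssh -n multinode-167190-m02 "sudo cat /home/docker/cp-test_multinode-167190_multinode-167190-m02.txt"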

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-167190 node stop m03: (1.269464816s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167190 status: exit status 7 (501.152407ms)

                                                
                                                
-- stdout --
	multinode-167190
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-167190-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-167190-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr: exit status 7 (497.962844ms)

                                                
                                                
-- stdout --
	multinode-167190
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-167190-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-167190-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:04:37.804155  146261 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:04:37.804290  146261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:04:37.804302  146261 out.go:374] Setting ErrFile to fd 2...
	I1229 07:04:37.804309  146261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:04:37.804519  146261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:04:37.804691  146261 out.go:368] Setting JSON to false
	I1229 07:04:37.804720  146261 mustload.go:66] Loading cluster: multinode-167190
	I1229 07:04:37.804782  146261 notify.go:221] Checking for updates...
	I1229 07:04:37.805233  146261 config.go:182] Loaded profile config "multinode-167190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:04:37.805257  146261 status.go:174] checking status of multinode-167190 ...
	I1229 07:04:37.805800  146261 cli_runner.go:164] Run: docker container inspect multinode-167190 --format={{.State.Status}}
	I1229 07:04:37.825735  146261 status.go:371] multinode-167190 host status = "Running" (err=<nil>)
	I1229 07:04:37.825757  146261 host.go:66] Checking if "multinode-167190" exists ...
	I1229 07:04:37.826005  146261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-167190
	I1229 07:04:37.843124  146261 host.go:66] Checking if "multinode-167190" exists ...
	I1229 07:04:37.843440  146261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:04:37.843488  146261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-167190
	I1229 07:04:37.861553  146261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/multinode-167190/id_rsa Username:docker}
	I1229 07:04:37.955343  146261 ssh_runner.go:195] Run: systemctl --version
	I1229 07:04:37.961400  146261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:04:37.973112  146261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:04:38.029367  146261 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-29 07:04:38.01952808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:04:38.029845  146261 kubeconfig.go:125] found "multinode-167190" server: "https://192.168.67.2:8443"
	I1229 07:04:38.029874  146261 api_server.go:166] Checking apiserver status ...
	I1229 07:04:38.029907  146261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:04:38.040982  146261 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup
	I1229 07:04:38.049149  146261 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1227/cgroup
	I1229 07:04:38.056681  146261 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-d44757ea90cbbea58dd7d6fcd3733a4ce12cc5695af311add60bb777e8653a35.scope/container/cgroup.freeze
	I1229 07:04:38.063866  146261 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1229 07:04:38.067859  146261 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1229 07:04:38.067880  146261 status.go:463] multinode-167190 apiserver status = Running (err=<nil>)
	I1229 07:04:38.067892  146261 status.go:176] multinode-167190 status: &{Name:multinode-167190 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:04:38.067913  146261 status.go:174] checking status of multinode-167190-m02 ...
	I1229 07:04:38.068140  146261 cli_runner.go:164] Run: docker container inspect multinode-167190-m02 --format={{.State.Status}}
	I1229 07:04:38.086502  146261 status.go:371] multinode-167190-m02 host status = "Running" (err=<nil>)
	I1229 07:04:38.086525  146261 host.go:66] Checking if "multinode-167190-m02" exists ...
	I1229 07:04:38.086758  146261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-167190-m02
	I1229 07:04:38.103610  146261 host.go:66] Checking if "multinode-167190-m02" exists ...
	I1229 07:04:38.103857  146261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:04:38.103889  146261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-167190-m02
	I1229 07:04:38.120758  146261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22353-9207/.minikube/machines/multinode-167190-m02/id_rsa Username:docker}
	I1229 07:04:38.214455  146261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:04:38.226158  146261 status.go:176] multinode-167190-m02 status: &{Name:multinode-167190-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:04:38.226200  146261 status.go:174] checking status of multinode-167190-m03 ...
	I1229 07:04:38.226484  146261 cli_runner.go:164] Run: docker container inspect multinode-167190-m03 --format={{.State.Status}}
	I1229 07:04:38.243788  146261 status.go:371] multinode-167190-m03 host status = "Stopped" (err=<nil>)
	I1229 07:04:38.243808  146261 status.go:384] host is not running, skipping remaining checks
	I1229 07:04:38.243816  146261 status.go:176] multinode-167190-m03 status: &{Name:multinode-167190-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
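Note the exit-code convention visible above: with any node stopped, minikube status still prints a full report but returns exit status 7, so scripts should branch on the code rather than treat it as a hard failure. A minimal check against the same profile:

	# stop the third node, then query status; exit code 7 means "not everything is Running"
	out/minikube-linux-amd64 -p multinode-167190 node stop m03
	out/minikube-linux-amd64 -p multinode-167190 status
	echo "status exit code: $?"   # expected: 7 while m03 is stopped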

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-167190 node start m03 -v=5 --alsologtostderr: (6.459319926s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.15s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-167190
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-167190
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-167190: (29.42202279s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167190 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167190 --wait=true -v=5 --alsologtostderr: (49.63923078s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-167190
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-167190 node delete m03: (4.941201434s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)
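Workers are added and removed with the node subcommand; after node delete the test re-checks kubectl get nodes, since the delete is expected to remove the Kubernetes node object as well as the container. A condensed sketch (same profile; node names follow the m02/m03 pattern):

	# add a worker, remove it again, then confirm only the remaining nodes are listed and Ready
	out/minikube-linux-amd64 node add -p multinode-167190
	out/minikube-linux-amd64 -p multinode-167190 node delete m03
	kubectl get nodes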

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 stop
E1229 07:06:19.140152   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-167190 stop: (28.429773271s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167190 status: exit status 7 (96.860822ms)

                                                
                                                
-- stdout --
	multinode-167190
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-167190-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr: exit status 7 (99.179373ms)

                                                
                                                
-- stdout --
	multinode-167190
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-167190-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:06:38.702733  156141 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:06:38.703021  156141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:06:38.703031  156141 out.go:374] Setting ErrFile to fd 2...
	I1229 07:06:38.703035  156141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:06:38.703262  156141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:06:38.703445  156141 out.go:368] Setting JSON to false
	I1229 07:06:38.703468  156141 mustload.go:66] Loading cluster: multinode-167190
	I1229 07:06:38.703595  156141 notify.go:221] Checking for updates...
	I1229 07:06:38.703851  156141 config.go:182] Loaded profile config "multinode-167190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:06:38.703871  156141 status.go:174] checking status of multinode-167190 ...
	I1229 07:06:38.704455  156141 cli_runner.go:164] Run: docker container inspect multinode-167190 --format={{.State.Status}}
	I1229 07:06:38.726642  156141 status.go:371] multinode-167190 host status = "Stopped" (err=<nil>)
	I1229 07:06:38.726662  156141 status.go:384] host is not running, skipping remaining checks
	I1229 07:06:38.726668  156141 status.go:176] multinode-167190 status: &{Name:multinode-167190 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:06:38.726723  156141 status.go:174] checking status of multinode-167190-m02 ...
	I1229 07:06:38.727002  156141 cli_runner.go:164] Run: docker container inspect multinode-167190-m02 --format={{.State.Status}}
	I1229 07:06:38.745969  156141 status.go:371] multinode-167190-m02 host status = "Stopped" (err=<nil>)
	I1229 07:06:38.745993  156141 status.go:384] host is not running, skipping remaining checks
	I1229 07:06:38.746001  156141 status.go:176] multinode-167190-m02 status: &{Name:multinode-167190-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.63s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167190 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167190 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.509343031s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167190 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.11s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-167190
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167190-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-167190-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.62221ms)

                                                
                                                
-- stdout --
	* [multinode-167190-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-167190-m02' is duplicated with machine name 'multinode-167190-m02' in profile 'multinode-167190'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167190-m03 --driver=docker  --container-runtime=crio
E1229 07:07:38.454044   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167190-m03 --driver=docker  --container-runtime=crio: (19.612174906s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-167190
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-167190: exit status 80 (295.521351ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-167190 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-167190-m03 already exists in multinode-167190-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-167190-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-167190-m03: (2.350160667s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.39s)
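The constraint being validated: a new profile name may not collide with a machine name inside an existing multi-node profile (multinode-167190 already owns a machine called multinode-167190-m02), so that start is refused with exit status 14, while an unrelated name is accepted. Sketch:

	# refused: the profile name clashes with an existing machine name (exit status 14, MK_USAGE)
	out/minikube-linux-amd64 start -p multinode-167190-m02 --driver=docker --container-runtime=crio
	# accepted: any non-conflicting name works, and can be deleted afterwards
	out/minikube-linux-amd64 start -p multinode-167190-m03 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p multinode-167190-m03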

                                                
                                    
TestScheduledStopUnix (95.58s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-891761 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-891761 --memory=3072 --driver=docker  --container-runtime=crio: (19.850334706s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-891761 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:08:10.323020  166049 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:08:10.323288  166049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:08:10.323298  166049 out.go:374] Setting ErrFile to fd 2...
	I1229 07:08:10.323305  166049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:08:10.323500  166049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:08:10.323757  166049 out.go:368] Setting JSON to false
	I1229 07:08:10.323867  166049 mustload.go:66] Loading cluster: scheduled-stop-891761
	I1229 07:08:10.324174  166049 config.go:182] Loaded profile config "scheduled-stop-891761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:08:10.324287  166049 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/scheduled-stop-891761/config.json ...
	I1229 07:08:10.324483  166049 mustload.go:66] Loading cluster: scheduled-stop-891761
	I1229 07:08:10.324609  166049 config.go:182] Loaded profile config "scheduled-stop-891761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-891761 -n scheduled-stop-891761
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:08:10.708480  166205 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:08:10.708616  166205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:08:10.708627  166205 out.go:374] Setting ErrFile to fd 2...
	I1229 07:08:10.708634  166205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:08:10.708822  166205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:08:10.709086  166205 out.go:368] Setting JSON to false
	I1229 07:08:10.709312  166205 daemonize_unix.go:73] killing process 166086 as it is an old scheduled stop
	I1229 07:08:10.709434  166205 mustload.go:66] Loading cluster: scheduled-stop-891761
	I1229 07:08:10.709752  166205 config.go:182] Loaded profile config "scheduled-stop-891761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:08:10.709832  166205 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/scheduled-stop-891761/config.json ...
	I1229 07:08:10.710030  166205 mustload.go:66] Loading cluster: scheduled-stop-891761
	I1229 07:08:10.710150  166205 config.go:182] Loaded profile config "scheduled-stop-891761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1229 07:08:10.714560   12733 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/scheduled-stop-891761/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-891761 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-891761 -n scheduled-stop-891761
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-891761
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-891761 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:08:36.584983  166911 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:08:36.585244  166911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:08:36.585255  166911 out.go:374] Setting ErrFile to fd 2...
	I1229 07:08:36.585259  166911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:08:36.585430  166911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:08:36.585653  166911 out.go:368] Setting JSON to false
	I1229 07:08:36.585724  166911 mustload.go:66] Loading cluster: scheduled-stop-891761
	I1229 07:08:36.586024  166911 config.go:182] Loaded profile config "scheduled-stop-891761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:08:36.586093  166911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/scheduled-stop-891761/config.json ...
	I1229 07:08:36.586302  166911 mustload.go:66] Loading cluster: scheduled-stop-891761
	I1229 07:08:36.586406  166911 config.go:182] Loaded profile config "scheduled-stop-891761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1229 07:09:01.499192   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-891761
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-891761: exit status 7 (75.777685ms)

                                                
                                                
-- stdout --
	scheduled-stop-891761
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-891761 -n scheduled-stop-891761
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-891761 -n scheduled-stop-891761: exit status 7 (75.428192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-891761" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-891761
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-891761: (4.247017325s)
--- PASS: TestScheduledStopUnix (95.58s)
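The scheduled-stop flow exercised here is driven entirely by flags on minikube stop: --schedule arms a background stop, re-issuing it replaces the previous schedule (the log above shows the old scheduled-stop process being killed), and --cancel-scheduled clears whatever is pending. A minimal sketch with the same profile name:

	# arm a stop five minutes out, then replace it with a 15 second schedule
	out/minikube-linux-amd64 stop -p scheduled-stop-891761 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-891761 --schedule 15s
	# cancel the pending stop
	out/minikube-linux-amd64 stop -p scheduled-stop-891761 --cancel-scheduled
	# TimeToStop in the status output reflects a pending schedule
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-891761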

                                                
                                    
TestInsufficientStorage (11.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-899672 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-899672 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.178743501s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5bf9bc63-e596-4afb-b52f-6c0ef8a629a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-899672] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4df2166b-26b5-4521-b6ee-818b1484b953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"ebd503af-9d7f-4c53-a584-e81893fb3f32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"db9afbee-fa18-4c01-a4b1-9131eede37d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig"}}
	{"specversion":"1.0","id":"734674ef-a4ad-450b-a0f1-0c5235fd8902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube"}}
	{"specversion":"1.0","id":"04738348-d5fa-42ae-8c34-185e3fcd80ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75bd0515-fea4-4d7f-ab21-31dc6c1d2bf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a14dfc6-1ab6-4a3e-b413-4764f63312b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"943c91ac-b09d-41e7-bdc9-783e0aba852d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"21207c29-54a0-40da-a8b7-edc4d9788983","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"eef61bc6-91ac-4f6e-8152-333e687c7760","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4160542d-dab6-46bf-b0f2-aa2603ef3b0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-899672\" primary control-plane node in \"insufficient-storage-899672\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7b1d754-7680-4f73-8d38-97cd0b0181a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766979815-22353 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"67827348-19b2-417a-91fb-447dc3d394cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc82a827-aa92-4b2e-96fa-8f8b4bcf4b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-899672 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-899672 --output=json --layout=cluster: exit status 7 (286.502582ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-899672","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-899672","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:09:35.442070  169443 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-899672" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-899672 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-899672 --output=json --layout=cluster: exit status 7 (282.178374ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-899672","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-899672","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:09:35.725168  169553 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-899672" does not appear in /home/jenkins/minikube-integration/22353-9207/kubeconfig
	E1229 07:09:35.736046  169553 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/insufficient-storage-899672/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-899672" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-899672
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-899672: (1.868939804s)
--- PASS: TestInsufficientStorage (11.62s)
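This test does not actually fill the disk: judging by the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 settings echoed in the JSON events above, it feeds minikube a fake storage figure so the preflight check trips with exit code 26 (RSRC_DOCKER_STORAGE). A reproduction sketch under that assumption (the error advice notes --force skips the check):

	# pretend only 19 of 100 units of storage are free so the start aborts with the storage error
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	    out/minikube-linux-amd64 start -p insufficient-storage-899672 --memory=3072 \
	    --output=json --wait=true --driver=docker --container-runtime=crio
	echo "start exit code: $?"   # expected: 26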

                                                
                                    
TestRunningBinaryUpgrade (69.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1239606039 start -p running-upgrade-796549 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1239606039 start -p running-upgrade-796549 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.531513804s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-796549 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-796549 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.446368172s)
helpers_test.go:176: Cleaning up "running-upgrade-796549" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-796549
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-796549: (2.882262273s)
--- PASS: TestRunningBinaryUpgrade (69.45s)

                                                
                                    
TestKubernetesUpgrade (342.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.791759506s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-174577 --alsologtostderr
E1229 07:12:38.450329   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-174577 --alsologtostderr: (12.640909147s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-174577 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-174577 status --format={{.Host}}: exit status 7 (100.685194ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m56.227072822s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-174577 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (87.700084ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-174577] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-174577
	    minikube start -p kubernetes-upgrade-174577 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1745772 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-174577 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.412298873s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-174577" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-174577
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-174577: (2.877645078s)
--- PASS: TestKubernetesUpgrade (342.21s)
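The upgrade path above is a plain start/stop/start sequence: bring the cluster up on the old Kubernetes version, stop it, then start again with a newer --kubernetes-version; an in-place downgrade is refused with exit status 106 and the recovery suggestions shown in the stderr block. Condensed, with the same versions:

	# create on v1.28.0, stop, then upgrade in place to v1.35.0
	out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 \
	    --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-174577
	out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.35.0 \
	    --driver=docker --container-runtime=crio
	# going back down is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-174577 --memory=3072 --kubernetes-version=v1.28.0 \
	    --driver=docker --container-runtime=crio || echo "downgrade refused: exit $?"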

                                                
                                    
TestMissingContainerUpgrade (63.82s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3357687618 start -p missing-upgrade-967138 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3357687618 start -p missing-upgrade-967138 --memory=3072 --driver=docker  --container-runtime=crio: (20.403188437s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-967138
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-967138: (1.73354737s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-967138
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-967138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-967138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.46842476s)
helpers_test.go:176: Cleaning up "missing-upgrade-967138" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-967138
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-967138: (2.505501328s)
--- PASS: TestMissingContainerUpgrade (63.82s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

                                                
                                    
TestPause/serial/Start (59.39s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-481637 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-481637 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (59.392574452s)
--- PASS: TestPause/serial/Start (59.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (306.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3409309406 start -p stopped-upgrade-518014 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3409309406 start -p stopped-upgrade-518014 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.699000042s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3409309406 -p stopped-upgrade-518014 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3409309406 -p stopped-upgrade-518014 stop: (1.939051607s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-518014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-518014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.231942438s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (306.87s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-481637 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-481637 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.738467425s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.76s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (54.17s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-457393 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-457393 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (47.366082411s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-457393 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-457393
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-457393: (6.22524372s)
--- PASS: TestPreload/Start-NoPreload-PullImage (54.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-868221 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-868221 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.680269ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-868221] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
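
The exit status 14 above comes from a usage check that refuses --kubernetes-version together with --no-kubernetes. A rough sketch of that kind of mutual-exclusion validation with Go's standard flag package follows; it is illustrative only, not minikube's actual implementation.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// Reject the contradictory combination before doing any work.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the log above reports exit status 14 for this usage error
	}
	fmt.Println("flags OK")
}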

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (19.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-868221 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-868221 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.874176107s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-868221 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (19.24s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-868221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-868221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.914311353s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-868221 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-868221 status -o json: exit status 2 (315.579864ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-868221","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-868221
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-868221: (1.972523689s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.20s)
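
The status -o json output above reports the host as Running while the kubelet and API server are Stopped, which is what this test checks for. A small self-contained sketch that parses that exact document with the standard library, using only the fields visible in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeStatus mirrors the fields present in the status output shown above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-868221","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	// The test expects the host to be up while Kubernetes itself is stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}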

                                                
                                    
TestNetworkPlugins/group/false (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-619064 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-619064 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (156.48704ms)

                                                
                                                
-- stdout --
	* [false-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:11:16.351986  199155 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:11:16.352271  199155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:16.352281  199155 out.go:374] Setting ErrFile to fd 2...
	I1229 07:11:16.352284  199155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:16.352482  199155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9207/.minikube/bin
	I1229 07:11:16.352938  199155 out.go:368] Setting JSON to false
	I1229 07:11:16.354012  199155 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3228,"bootTime":1766989048,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1229 07:11:16.354072  199155 start.go:143] virtualization: kvm guest
	I1229 07:11:16.355756  199155 out.go:179] * [false-619064] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1229 07:11:16.356888  199155 notify.go:221] Checking for updates...
	I1229 07:11:16.356920  199155 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:11:16.358134  199155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:11:16.359482  199155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-9207/kubeconfig
	I1229 07:11:16.360609  199155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9207/.minikube
	I1229 07:11:16.361689  199155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1229 07:11:16.362919  199155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:11:16.364567  199155 config.go:182] Loaded profile config "NoKubernetes-868221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1229 07:11:16.364732  199155 config.go:182] Loaded profile config "stopped-upgrade-518014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:11:16.364864  199155 config.go:182] Loaded profile config "test-preload-457393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:11:16.364988  199155 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:11:16.390198  199155 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1229 07:11:16.390294  199155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:11:16.444403  199155 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-29 07:11:16.435005927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1229 07:11:16.444512  199155 docker.go:319] overlay module found
	I1229 07:11:16.446003  199155 out.go:179] * Using the docker driver based on user configuration
	I1229 07:11:16.446973  199155 start.go:309] selected driver: docker
	I1229 07:11:16.446987  199155 start.go:928] validating driver "docker" against <nil>
	I1229 07:11:16.446997  199155 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:11:16.448527  199155 out.go:203] 
	W1229 07:11:16.449508  199155 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1229 07:11:16.450428  199155 out.go:203] 

                                                
                                                
** /stderr **
E1229 07:11:19.140096   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-619064 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-619064" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-868221
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:10:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-518014
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: test-preload-457393
contexts:
- context:
    cluster: NoKubernetes-868221
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-868221
  name: NoKubernetes-868221
- context:
    cluster: stopped-upgrade-518014
    user: stopped-upgrade-518014
  name: stopped-upgrade-518014
- context:
    cluster: test-preload-457393
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: test-preload-457393
  name: test-preload-457393
current-context: NoKubernetes-868221
kind: Config
users:
- name: NoKubernetes-868221
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/NoKubernetes-868221/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/NoKubernetes-868221/client.key
- name: stopped-upgrade-518014
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/stopped-upgrade-518014/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/stopped-upgrade-518014/client.key
- name: test-preload-457393
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-619064

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-619064"

                                                
                                                
----------------------- debugLogs end: false-619064 [took: 3.038260422s] --------------------------------
helpers_test.go:176: Cleaning up "false-619064" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-619064
--- PASS: TestNetworkPlugins/group/false (3.36s)
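
This group passes because the start is rejected up front: --cni=false is incompatible with the crio runtime (the MK_USAGE error near the top of the block), so no cluster ever exists for the debug probes. A hedged sketch of that kind of pre-flight check follows; the type and function names are illustrative, not minikube's.

package main

import "fmt"

// clusterConfig holds just the two settings involved in the rejection above.
type clusterConfig struct {
	ContainerRuntime string
	CNI              string // "false" mirrors --cni=false
}

// validateCNI refuses to disable CNI for a runtime that requires it.
func validateCNI(cfg clusterConfig) error {
	if cfg.CNI == "false" && cfg.ContainerRuntime == "crio" {
		return fmt.Errorf("the %q container runtime requires CNI", cfg.ContainerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI(clusterConfig{ContainerRuntime: "crio", CNI: "false"}); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("config accepted")
}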

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (43.93s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-457393 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-457393 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.672391815s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-457393 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (43.93s)

                                                
                                    
TestNoKubernetes/serial/Start (4.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-868221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-868221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.572116588s)
--- PASS: TestNoKubernetes/serial/Start (4.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22353-9207/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-868221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-868221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (326.610592ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
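
VerifyK8sNotRunning passes because `systemctl is-active` exits non-zero (status 3 in the stderr above) when the kubelet unit is not running. A local sketch of the same check, run directly instead of over the test's ssh session; flag spelling differs slightly from the command in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// is-active returns 0 only when the unit is active; 3 means inactive.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("kubelet is not active, exit status", exitErr.ExitCode())
		return
	}
	fmt.Println("could not run systemctl:", err)
}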

                                                
                                    
TestNoKubernetes/serial/ProfileList (15.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (14.975203994s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-868221
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-868221: (1.252075062s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-868221 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-868221 --driver=docker  --container-runtime=crio: (6.434373846s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.43s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-868221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-868221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.025194ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.004186853s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (47.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-876718 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cb3bcbb9-b40d-499b-89b5-6b34baf24e5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [cb3bcbb9-b40d-499b-89b5-6b34baf24e5b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.002790313s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-876718 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.23s)
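
The DeployApp step above waits up to 8m0s for the busybox pod to report Running before exec'ing into it. A rough sketch of that wait loop using kubectl's jsonpath output, assuming kubectl is on PATH and reusing the context name from the log; this is not the test suite's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether every pod matching the selector is in phase Running.
func podsRunning(context, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // no matching pods yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the test waits up to 8m0s
	for time.Now().Before(deadline) {
		if ok, err := podsRunning("old-k8s-version-876718", "integration-test=busybox"); err == nil && ok {
			fmt.Println("busybox is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
}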

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-876718 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-876718 --alsologtostderr -v=3: (16.000163093s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718: exit status 7 (76.088556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-876718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-876718 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.07108461s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-876718 -n old-k8s-version-876718
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.42s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-518014
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (43.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (43.523871194s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (43.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-bfg2s" [afa0a6d0-35c1-415f-837e-8217b89f54fc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004045545s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-bfg2s" [afa0a6d0-35c1-415f-837e-8217b89f54fc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003794854s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-876718 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-876718 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-122332 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [64807d8c-0a89-4e9c-a816-fff1e31fce8f] Pending
helpers_test.go:353: "busybox" [64807d8c-0a89-4e9c-a816-fff1e31fce8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [64807d8c-0a89-4e9c-a816-fff1e31fce8f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003879381s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-122332 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (42.702445735s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (18.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-122332 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-122332 --alsologtostderr -v=3: (18.26435418s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (37.186469192s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332: exit status 7 (86.378502ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-122332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1229 07:16:19.140626   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/functional-120775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-122332 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.216614909s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122332 -n no-preload-122332
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-739827 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [fec130f6-04f7-4f99-8723-932ebe4f8b00] Pending
helpers_test.go:353: "busybox" [fec130f6-04f7-4f99-8723-932ebe4f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [fec130f6-04f7-4f99-8723-932ebe4f8b00] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004199489s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-739827 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3c0d7004-d857-4d4d-847d-4122d3514fc2] Pending
helpers_test.go:353: "busybox" [3c0d7004-d857-4d4d-847d-4122d3514fc2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3c0d7004-d857-4d4d-847d-4122d3514fc2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004293381s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-739827 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-739827 --alsologtostderr -v=3: (18.137422306s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-798607 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-798607 --alsologtostderr -v=3: (16.262993792s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827: exit status 7 (74.249236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-739827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-739827 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.271771941s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-739827 -n embed-certs-739827
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vrx7d" [4fbfdcd6-40d5-4737-a785-ddca883ff87e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003834068s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607: exit status 7 (82.877306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-798607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-798607 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (43.566710844s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-798607 -n default-k8s-diff-port-798607
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vrx7d" [4fbfdcd6-40d5-4737-a785-ddca883ff87e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004254656s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-122332 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-122332 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (24.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (24.413187618s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rdq2m" [5c386b28-09c8-40fb-8930-78a721215bbd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00327266s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-mj5lz" [15c14142-c7db-4c3e-891d-c5688b9dd92e] Running
E1229 07:17:38.450215   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/addons-264018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003803927s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-067566 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-067566 --alsologtostderr -v=3: (2.481129416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rdq2m" [5c386b28-09c8-40fb-8930-78a721215bbd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00323048s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-739827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-mj5lz" [15c14142-c7db-4c3e-891d-c5688b9dd92e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003239279s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-798607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566: exit status 7 (88.167855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-067566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-067566 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (12.727919332s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-067566 -n newest-cni-067566
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-739827 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-798607 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (39.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.240839187s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-067566 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.496487943s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.50s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (56.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.717744933s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.72s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (44.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (44.750889371s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (44.75s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-619064 "pgrep -a kubelet"
I1229 07:18:28.484423   12733 config.go:182] Loaded profile config "auto-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-619064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-clwt4" [7d01decc-e90f-4e46-8895-8807ae426584] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-clwt4" [7d01decc-e90f-4e46-8895-8807ae426584] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00536833s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-lxslh" [3877643f-3f8d-4214-ab7b-042c9e521752] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004126386s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-619064 "pgrep -a kubelet"
I1229 07:18:50.326567   12733 config.go:182] Loaded profile config "kindnet-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-619064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-b29n5" [5801a991-f0c3-43b9-abd6-da07b6dba55b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-b29n5" [5801a991-f0c3-43b9-abd6-da07b6dba55b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003367463s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-619064 "pgrep -a kubelet"
I1229 07:18:53.412617   12733 config.go:182] Loaded profile config "custom-flannel-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-619064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-cqn7s" [95cacf75-ae99-418e-b655-4d5064937e70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-cqn7s" [95cacf75-ae99-418e-b655-4d5064937e70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.009359337s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-bwfhr" [cbcd2ea1-b8cc-4bf8-8ebb-adb535d248bf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.041170026s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (57.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (57.142165507s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-619064 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-619064 replace --force -f testdata/netcat-deployment.yaml
E1229 07:19:02.371453   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1229 07:19:02.745913   12733 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1229 07:19:02.751942   12733 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-94rg4" [29f41de1-d60a-4a98-8309-4f64f61a57bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 07:19:04.932390   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-94rg4" [29f41de1-d60a-4a98-8309-4f64f61a57bc] Running
E1229 07:19:10.053081   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005686679s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.68s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (45.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.375885993s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (59.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-619064 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (59.181202935s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.18s)

                                                
                                    
TestPreload/PreloadSrc/gcs (3.88s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-175645 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-175645 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.37175589s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-175645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-175645
--- PASS: TestPreload/PreloadSrc/gcs (3.88s)

                                                
                                    
TestPreload/PreloadSrc/github (13.92s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-135710 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1229 07:19:40.775105   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/old-k8s-version-876718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-135710 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (13.727356566s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-135710" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-135710
--- PASS: TestPreload/PreloadSrc/github (13.92s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.43s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-484343 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-484343" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-484343
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-619064 "pgrep -a kubelet"
I1229 07:19:55.892455   12733 config.go:182] Loaded profile config "enable-default-cni-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-619064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qkz4m" [774431a7-5525-4ef6-ac91-5ebc4d07d763] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-qkz4m" [774431a7-5525-4ef6-ac91-5ebc4d07d763] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003957739s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-m2hns" [289acca3-2c97-4b2e-8b3c-9acbcfad3e7b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003593036s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-619064 "pgrep -a kubelet"
I1229 07:20:12.187116   12733 config.go:182] Loaded profile config "flannel-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-619064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-m2gmz" [650cc3e5-7bde-4168-b1a5-58442d5f1414] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-m2gmz" [650cc3e5-7bde-4168-b1a5-58442d5f1414] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003302517s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-619064 "pgrep -a kubelet"
I1229 07:20:24.925876   12733 config.go:182] Loaded profile config "bridge-619064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-619064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-r2slr" [e96477ce-a162-4fd2-aa73-296eb5c0b3d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-r2slr" [e96477ce-a162-4fd2-aa73-296eb5c0b3d2] Running
E1229 07:20:32.220418   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.225681   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.235924   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.256204   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.296502   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.376836   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.537307   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:20:32.858129   12733 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/no-preload-122332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004223399s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.18s)
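Editor's note: the repeated E1229 07:20:32 cert_rotation lines are client-go reloading a client certificate for the already-deleted no-preload-122332 profile; they are noise for this subtest, which still passed. A minimal sketch (not part of the test suite) that surfaces the same stale-certificate condition with client-go's clientcmd loader, assuming the kubeconfig layout these jobs write:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load whatever kubeconfig the environment points at, falling back to ~/.kube/config.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Flag auth entries whose client certificate files no longer exist on disk,
	// which is exactly what produces the "Loading client cert failed" messages.
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, err := os.Stat(auth.ClientCertificate); err != nil {
			fmt.Printf("user %q references missing cert: %s\n", name, auth.ClientCertificate)
		}
	}
}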

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-619064 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-619064 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    

Test skip (27/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-708770" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-708770
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-619064 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-619064" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-868221
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:10:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-518014
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: test-preload-457393
contexts:
- context:
    cluster: NoKubernetes-868221
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-868221
  name: NoKubernetes-868221
- context:
    cluster: stopped-upgrade-518014
    user: stopped-upgrade-518014
  name: stopped-upgrade-518014
- context:
    cluster: test-preload-457393
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: test-preload-457393
  name: test-preload-457393
current-context: NoKubernetes-868221
kind: Config
users:
- name: NoKubernetes-868221
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/NoKubernetes-868221/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/NoKubernetes-868221/client.key
- name: stopped-upgrade-518014
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/stopped-upgrade-518014/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/stopped-upgrade-518014/client.key
- name: test-preload-457393
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-619064

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-619064"

                                                
                                                
----------------------- debugLogs end: kubenet-619064 [took: 3.232280381s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-619064" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-619064
--- SKIP: TestNetworkPlugins/group/kubenet (3.39s)
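Editor's note: every "context was not found" and "Profile ... not found" line in the debugLogs block above has the same cause: the kubenet group is skipped before a cluster is ever created, so the debug collector queries a kubeconfig context that does not exist. A hedged sketch of that check in isolation, using client-go's clientcmd loader (the context name is taken from the log; everything else is an assumption, not harness code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	want := "kubenet-619064" // the context the debug collector asked for
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile // ~/.kube/config
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// List what is actually in the kubeconfig, then flag the missing context.
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %-25s -> cluster %s\n", name, ctx.Cluster)
	}
	if _, ok := cfg.Contexts[want]; !ok {
		fmt.Printf("context %q is absent, matching the errors in the debug log above\n", want)
	}
}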

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-619064 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-619064" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-868221
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:10:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-518014
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22353-9207/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: test-preload-457393
contexts:
- context:
    cluster: NoKubernetes-868221
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-868221
  name: NoKubernetes-868221
- context:
    cluster: stopped-upgrade-518014
    user: stopped-upgrade-518014
  name: stopped-upgrade-518014
- context:
    cluster: test-preload-457393
    extensions:
    - extension:
        last-update: Mon, 29 Dec 2025 07:11:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: test-preload-457393
  name: test-preload-457393
current-context: NoKubernetes-868221
kind: Config
users:
- name: NoKubernetes-868221
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/NoKubernetes-868221/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/NoKubernetes-868221/client.key
- name: stopped-upgrade-518014
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/stopped-upgrade-518014/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/stopped-upgrade-518014/client.key
- name: test-preload-457393
  user:
    client-certificate: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/client.crt
    client-key: /home/jenkins/minikube-integration/22353-9207/.minikube/profiles/test-preload-457393/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-619064

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-619064" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-619064"

                                                
                                                
----------------------- debugLogs end: cilium-619064 [took: 3.346878682s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-619064" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-619064
--- SKIP: TestNetworkPlugins/group/cilium (3.51s)

                                                
                                    