Test Report: Docker_Linux_crio_arm64 22344

edd64449414ff518763defe8c5f2fdfa65b6a5d9:2025-12-27:43007

Failed tests (27/332)

TestAddons/serial/Volcano (0.72s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable volcano --alsologtostderr -v=1: exit status 11 (714.969607ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:05.721189  309779 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:05.725547  309779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:05.725570  309779 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:05.725578  309779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:05.725899  309779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:05.726717  309779 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:05.727122  309779 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:05.727139  309779 addons.go:622] checking whether the cluster is paused
	I1227 09:15:05.727288  309779 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:05.727299  309779 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:05.727813  309779 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:05.768024  309779 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:05.768081  309779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:05.787891  309779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:05.901365  309779 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:05.901490  309779 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:05.935065  309779 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:05.935087  309779 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:05.935092  309779 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:05.935096  309779 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:05.935099  309779 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:05.935103  309779 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:05.935106  309779 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:05.935109  309779 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:05.935112  309779 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:05.935118  309779 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:05.935125  309779 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:05.935128  309779 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:05.935132  309779 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:05.935135  309779 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:05.935138  309779 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:05.935143  309779 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:05.935147  309779 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:05.935151  309779 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:05.935154  309779 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:05.935157  309779 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:05.935163  309779 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:05.935166  309779 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:05.935168  309779 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:05.935171  309779 cri.go:96] found id: ""
	I1227 09:15:05.935222  309779 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:05.950369  309779 out.go:203] 
	W1227 09:15:05.953186  309779 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:05.953211  309779 out.go:285] * 
	* 
	W1227 09:15:06.341527  309779 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:06.344513  309779 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.72s)
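
Note: every addon-disable failure in this report shares the same signature. As the log above shows, the disable path first checks whether the cluster is paused by listing kube-system containers with crictl over SSH and then running "sudo runc list -f json" on the node; that runc call fails with "open /run/runc: no such file or directory" (likely because runc is not the active runtime on this CRI-O node), so the check exits with status 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the check by hand, using only commands already visible in this log (profile name addons-730938 is from this run and assumes the cluster is still up):

	# List kube-system containers the way the paused check does (mirrors the log above).
	out/minikube-linux-arm64 -p addons-730938 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The call that fails in this run: runc looks for its state directory at /run/runc,
	# which is absent on this node, so it exits non-zero.
	out/minikube-linux-arm64 -p addons-730938 ssh "sudo runc list -f json"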

TestAddons/parallel/Registry (14.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.408979ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007858938s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004195339s
addons_test.go:394: (dbg) Run:  kubectl --context addons-730938 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-730938 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-730938 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.266322295s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 ip
2025/12/27 09:15:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable registry --alsologtostderr -v=1: exit status 11 (250.734019ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:31.236709  310327 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:31.237447  310327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:31.237460  310327 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:31.237466  310327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:31.237719  310327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:31.238037  310327 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:31.238498  310327 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:31.238526  310327 addons.go:622] checking whether the cluster is paused
	I1227 09:15:31.238642  310327 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:31.238658  310327 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:31.239214  310327 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:31.256926  310327 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:31.257000  310327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:31.274264  310327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:31.372888  310327 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:31.373004  310327 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:31.403901  310327 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:31.403921  310327 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:31.403926  310327 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:31.403931  310327 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:31.403934  310327 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:31.403937  310327 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:31.403940  310327 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:31.403970  310327 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:31.403980  310327 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:31.403987  310327 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:31.403990  310327 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:31.403993  310327 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:31.403997  310327 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:31.404000  310327 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:31.404003  310327 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:31.404008  310327 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:31.404011  310327 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:31.404014  310327 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:31.404017  310327 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:31.404021  310327 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:31.404026  310327 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:31.404049  310327 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:31.404065  310327 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:31.404075  310327 cri.go:96] found id: ""
	I1227 09:15:31.404128  310327 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:31.419033  310327 out.go:203] 
	W1227 09:15:31.422133  310327 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:31.422194  310327 out.go:285] * 
	* 
	W1227 09:15:31.425440  310327 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:31.428775  310327 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.79s)

TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.617913ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-730938
addons_test.go:334: (dbg) Run:  kubectl --context addons-730938 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (282.428779ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:16:01.920507  312134 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:16:01.921262  312134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:01.921306  312134 out.go:374] Setting ErrFile to fd 2...
	I1227 09:16:01.921329  312134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:01.921734  312134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:16:01.922257  312134 mustload.go:66] Loading cluster: addons-730938
	I1227 09:16:01.923373  312134 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:01.923405  312134 addons.go:622] checking whether the cluster is paused
	I1227 09:16:01.923583  312134 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:01.923605  312134 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:16:01.924467  312134 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:16:01.946669  312134 ssh_runner.go:195] Run: systemctl --version
	I1227 09:16:01.946724  312134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:16:01.985664  312134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:16:02.094083  312134 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:16:02.094178  312134 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:16:02.124584  312134 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:16:02.124609  312134 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:16:02.124616  312134 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:16:02.124620  312134 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:16:02.124623  312134 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:16:02.124627  312134 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:16:02.124630  312134 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:16:02.124633  312134 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:16:02.124636  312134 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:16:02.124643  312134 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:16:02.124646  312134 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:16:02.124649  312134 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:16:02.124653  312134 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:16:02.124657  312134 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:16:02.124660  312134 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:16:02.124664  312134 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:16:02.124668  312134 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:16:02.124672  312134 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:16:02.124675  312134 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:16:02.124679  312134 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:16:02.124685  312134 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:16:02.124692  312134 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:16:02.124696  312134 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:16:02.124704  312134 cri.go:96] found id: ""
	I1227 09:16:02.124770  312134 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:16:02.139759  312134 out.go:203] 
	W1227 09:16:02.142686  312134 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:16:02.142716  312134 out.go:285] * 
	* 
	W1227 09:16:02.146016  312134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:16:02.148895  312134 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)

TestAddons/parallel/Ingress (9.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-730938 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-730938 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-730938 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [551a9857-97f9-48ef-9c9b-f2cbf3052a9f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [551a9857-97f9-48ef-9c9b-f2cbf3052a9f] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.007018301s
I1227 09:15:59.503055  303043 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-730938 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (305.662377ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:16:01.047695  312004 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:16:01.048514  312004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:01.048551  312004 out.go:374] Setting ErrFile to fd 2...
	I1227 09:16:01.048573  312004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:01.048877  312004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:16:01.049245  312004 mustload.go:66] Loading cluster: addons-730938
	I1227 09:16:01.049690  312004 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:01.049735  312004 addons.go:622] checking whether the cluster is paused
	I1227 09:16:01.049883  312004 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:01.049920  312004 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:16:01.050535  312004 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:16:01.070481  312004 ssh_runner.go:195] Run: systemctl --version
	I1227 09:16:01.070532  312004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:16:01.091657  312004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:16:01.194687  312004 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:16:01.194801  312004 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:16:01.242017  312004 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:16:01.242044  312004 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:16:01.242068  312004 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:16:01.242073  312004 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:16:01.242086  312004 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:16:01.242092  312004 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:16:01.242097  312004 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:16:01.242104  312004 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:16:01.242108  312004 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:16:01.242116  312004 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:16:01.242119  312004 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:16:01.242123  312004 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:16:01.242126  312004 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:16:01.242130  312004 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:16:01.242133  312004 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:16:01.242138  312004 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:16:01.242142  312004 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:16:01.242174  312004 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:16:01.242178  312004 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:16:01.242182  312004 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:16:01.242188  312004 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:16:01.242192  312004 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:16:01.242195  312004 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:16:01.242198  312004 cri.go:96] found id: ""
	I1227 09:16:01.242260  312004 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:16:01.262901  312004 out.go:203] 
	W1227 09:16:01.282393  312004 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:16:01.282417  312004 out.go:285] * 
	* 
	W1227 09:16:01.285758  312004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:16:01.288865  312004 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable ingress --alsologtostderr -v=1: exit status 11 (335.241712ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:16:01.378825  312061 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:16:01.379601  312061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:01.379638  312061 out.go:374] Setting ErrFile to fd 2...
	I1227 09:16:01.379665  312061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:01.380085  312061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:16:01.380508  312061 mustload.go:66] Loading cluster: addons-730938
	I1227 09:16:01.381246  312061 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:01.381300  312061 addons.go:622] checking whether the cluster is paused
	I1227 09:16:01.381480  312061 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:01.381524  312061 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:16:01.382380  312061 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:16:01.401865  312061 ssh_runner.go:195] Run: systemctl --version
	I1227 09:16:01.402005  312061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:16:01.422662  312061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:16:01.529022  312061 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:16:01.529129  312061 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:16:01.596373  312061 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:16:01.596391  312061 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:16:01.596396  312061 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:16:01.596400  312061 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:16:01.596403  312061 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:16:01.596407  312061 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:16:01.596410  312061 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:16:01.596413  312061 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:16:01.596416  312061 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:16:01.596421  312061 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:16:01.596424  312061 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:16:01.596428  312061 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:16:01.596431  312061 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:16:01.596434  312061 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:16:01.596437  312061 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:16:01.596444  312061 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:16:01.596447  312061 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:16:01.596452  312061 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:16:01.596455  312061 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:16:01.596458  312061 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:16:01.596463  312061 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:16:01.596466  312061 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:16:01.596469  312061 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:16:01.596472  312061 cri.go:96] found id: ""
	I1227 09:16:01.596545  312061 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:16:01.615301  312061 out.go:203] 
	W1227 09:16:01.618471  312061 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:16:01.618497  312061 out.go:285] * 
	* 
	W1227 09:16:01.621787  312061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:16:01.624927  312061 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (9.76s)

TestAddons/parallel/InspektorGadget (6.34s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-gj25j" [e45546cf-00b9-4cf5-b367-5db48ef8175d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004291782s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (331.289995ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:51.603646  311482 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:51.605140  311482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:51.605155  311482 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:51.605162  311482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:51.605476  311482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:51.605773  311482 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:51.606210  311482 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:51.606234  311482 addons.go:622] checking whether the cluster is paused
	I1227 09:15:51.606345  311482 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:51.606355  311482 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:51.607361  311482 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:51.629171  311482 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:51.629240  311482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:51.653536  311482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:51.773160  311482 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:51.773280  311482 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:51.828481  311482 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:51.828547  311482 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:51.828567  311482 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:51.828589  311482 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:51.828625  311482 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:51.828649  311482 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:51.828669  311482 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:51.828690  311482 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:51.828710  311482 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:51.828748  311482 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:51.828780  311482 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:51.828801  311482 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:51.828823  311482 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:51.828863  311482 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:51.828888  311482 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:51.828927  311482 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:51.828945  311482 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:51.828981  311482 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:51.829008  311482 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:51.829042  311482 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:51.829068  311482 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:51.829087  311482 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:51.829108  311482 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:51.829128  311482 cri.go:96] found id: ""
	I1227 09:15:51.829208  311482 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:51.850853  311482 out.go:203] 
	W1227 09:15:51.854534  311482 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:51.854575  311482 out.go:285] * 
	* 
	W1227 09:15:51.857900  311482 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:51.863327  311482 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.34s)

TestAddons/parallel/MetricsServer (5.57s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.543539ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00367036s
addons_test.go:465: (dbg) Run:  kubectl --context addons-730938 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (345.271154ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:45.334658  311245 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:45.336050  311245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:45.336066  311245 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:45.336072  311245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:45.336386  311245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:45.336927  311245 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:45.337476  311245 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:45.337506  311245 addons.go:622] checking whether the cluster is paused
	I1227 09:15:45.337666  311245 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:45.337687  311245 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:45.338342  311245 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:45.356459  311245 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:45.356516  311245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:45.374862  311245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:45.472740  311245 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:45.472827  311245 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:45.503689  311245 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:45.503711  311245 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:45.503717  311245 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:45.503721  311245 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:45.503724  311245 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:45.503728  311245 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:45.503753  311245 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:45.503766  311245 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:45.503770  311245 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:45.503777  311245 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:45.503785  311245 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:45.503788  311245 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:45.503791  311245 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:45.503794  311245 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:45.503797  311245 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:45.503802  311245 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:45.503805  311245 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:45.503809  311245 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:45.503812  311245 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:45.503831  311245 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:45.503851  311245 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:45.503861  311245 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:45.503864  311245 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:45.503867  311245 cri.go:96] found id: ""
	I1227 09:15:45.503924  311245 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:45.518813  311245 out.go:203] 
	W1227 09:15:45.521666  311245 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:45.521692  311245 out.go:285] * 
	* 
	W1227 09:15:45.525410  311245 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:45.528249  311245 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.57s)

TestAddons/parallel/CSI (34.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1227 09:15:36.828908  303043 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 09:15:36.833395  303043 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 09:15:36.833423  303043 kapi.go:107] duration metric: took 4.539592ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.550538ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-730938 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-730938 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [364d6c73-fe7c-4f76-8e6d-b391d38693fc] Pending
helpers_test.go:353: "task-pv-pod" [364d6c73-fe7c-4f76-8e6d-b391d38693fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [364d6c73-fe7c-4f76-8e6d-b391d38693fc] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003390803s
addons_test.go:574: (dbg) Run:  kubectl --context addons-730938 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-730938 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-730938 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-730938 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-730938 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-730938 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-730938 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [3cd9db83-c4fb-40d0-8519-2c08af83c038] Pending
helpers_test.go:353: "task-pv-pod-restore" [3cd9db83-c4fb-40d0-8519-2c08af83c038] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003452345s
addons_test.go:616: (dbg) Run:  kubectl --context addons-730938 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-730938 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-730938 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (298.849645ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:16:10.718123  312384 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:16:10.719183  312384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:10.719228  312384 out.go:374] Setting ErrFile to fd 2...
	I1227 09:16:10.719251  312384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:10.719752  312384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:16:10.723783  312384 mustload.go:66] Loading cluster: addons-730938
	I1227 09:16:10.724338  312384 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:10.724387  312384 addons.go:622] checking whether the cluster is paused
	I1227 09:16:10.724541  312384 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:10.724576  312384 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:16:10.725145  312384 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:16:10.748032  312384 ssh_runner.go:195] Run: systemctl --version
	I1227 09:16:10.748152  312384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:16:10.767435  312384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:16:10.869040  312384 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:16:10.869166  312384 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:16:10.901197  312384 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:16:10.901217  312384 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:16:10.901222  312384 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:16:10.901225  312384 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:16:10.901228  312384 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:16:10.901232  312384 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:16:10.901235  312384 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:16:10.901238  312384 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:16:10.901241  312384 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:16:10.901248  312384 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:16:10.901255  312384 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:16:10.901258  312384 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:16:10.901261  312384 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:16:10.901264  312384 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:16:10.901267  312384 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:16:10.901272  312384 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:16:10.901275  312384 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:16:10.901280  312384 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:16:10.901283  312384 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:16:10.901287  312384 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:16:10.901291  312384 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:16:10.901294  312384 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:16:10.901297  312384 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:16:10.901300  312384 cri.go:96] found id: ""
	I1227 09:16:10.901351  312384 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:16:10.925756  312384 out.go:203] 
	W1227 09:16:10.928687  312384 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:16:10.928793  312384 out.go:285] * 
	* 
	W1227 09:16:10.933647  312384 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:16:10.936598  312384 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (265.777134ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:16:10.995157  312439 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:16:10.995971  312439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:10.995989  312439 out.go:374] Setting ErrFile to fd 2...
	I1227 09:16:10.995997  312439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:16:10.996414  312439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:16:10.997277  312439 mustload.go:66] Loading cluster: addons-730938
	I1227 09:16:10.998025  312439 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:10.998101  312439 addons.go:622] checking whether the cluster is paused
	I1227 09:16:10.998305  312439 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:16:10.998351  312439 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:16:10.999095  312439 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:16:11.020893  312439 ssh_runner.go:195] Run: systemctl --version
	I1227 09:16:11.021015  312439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:16:11.040907  312439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:16:11.145171  312439 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:16:11.145287  312439 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:16:11.181650  312439 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:16:11.181671  312439 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:16:11.181676  312439 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:16:11.181680  312439 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:16:11.181683  312439 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:16:11.181686  312439 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:16:11.181689  312439 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:16:11.181692  312439 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:16:11.181695  312439 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:16:11.181702  312439 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:16:11.181705  312439 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:16:11.181708  312439 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:16:11.181711  312439 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:16:11.181714  312439 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:16:11.181717  312439 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:16:11.181725  312439 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:16:11.181728  312439 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:16:11.181732  312439 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:16:11.181735  312439 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:16:11.181738  312439 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:16:11.181743  312439 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:16:11.181745  312439 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:16:11.181748  312439 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:16:11.181751  312439 cri.go:96] found id: ""
	I1227 09:16:11.181801  312439 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:16:11.196974  312439 out.go:203] 
	W1227 09:16:11.199709  312439 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:16:11.199731  312439 out.go:285] * 
	* 
	W1227 09:16:11.203108  312439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:16:11.206072  312439 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (34.39s)

TestAddons/parallel/Headlamp (3.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-730938 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-730938 --alsologtostderr -v=1: exit status 11 (352.570681ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:36.757595  310624 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:36.758979  310624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:36.758998  310624 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:36.759004  310624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:36.759276  310624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:36.759588  310624 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:36.760009  310624 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:36.760029  310624 addons.go:622] checking whether the cluster is paused
	I1227 09:15:36.760134  310624 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:36.760143  310624 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:36.760654  310624 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:36.779758  310624 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:36.780628  310624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:36.804551  310624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:36.927730  310624 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:36.927835  310624 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:36.991518  310624 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:36.991538  310624 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:36.991543  310624 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:36.991547  310624 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:36.991550  310624 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:36.991553  310624 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:36.991556  310624 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:36.991560  310624 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:36.991563  310624 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:36.991577  310624 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:36.991581  310624 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:36.991584  310624 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:36.991587  310624 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:36.991590  310624 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:36.991593  310624 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:36.991601  310624 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:36.991605  310624 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:36.991609  310624 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:36.991612  310624 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:36.991615  310624 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:36.991620  310624 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:36.991623  310624 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:36.991627  310624 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:36.991631  310624 cri.go:96] found id: ""
	I1227 09:15:36.991682  310624 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:37.011358  310624 out.go:203] 
	W1227 09:15:37.014365  310624 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:37.014393  310624 out.go:285] * 
	* 
	W1227 09:15:37.017992  310624 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:37.021043  310624 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-730938 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-730938
helpers_test.go:244: (dbg) docker inspect addons-730938:

-- stdout --
	[
	    {
	        "Id": "600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e",
	        "Created": "2025-12-27T09:13:10.277136785Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:13:10.349725418Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e/hostname",
	        "HostsPath": "/var/lib/docker/containers/600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e/hosts",
	        "LogPath": "/var/lib/docker/containers/600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e/600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e-json.log",
	        "Name": "/addons-730938",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-730938:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-730938",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e",
	                "LowerDir": "/var/lib/docker/overlay2/b9a70f275ced9483e7946dace4bcdc0df1357bf395b67614f35dde5dab4e8732-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9a70f275ced9483e7946dace4bcdc0df1357bf395b67614f35dde5dab4e8732/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9a70f275ced9483e7946dace4bcdc0df1357bf395b67614f35dde5dab4e8732/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9a70f275ced9483e7946dace4bcdc0df1357bf395b67614f35dde5dab4e8732/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-730938",
	                "Source": "/var/lib/docker/volumes/addons-730938/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-730938",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-730938",
	                "name.minikube.sigs.k8s.io": "addons-730938",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b8da761ac89e922ff7c2958a29355a959a263deab6ce71e11e6ce18bda3ee780",
	            "SandboxKey": "/var/run/docker/netns/b8da761ac89e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-730938": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:07:ad:06:9f:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1125cec7cc85cd76096f6a1296d3c24ddf9155b67e3a55d490a5292c62127c3",
	                    "EndpointID": "f25298071b30bce19499ee69ed84f7e6afa03e16a87bd5a202c49b78c989cfb4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-730938",
	                        "600191d502c5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-730938 -n addons-730938
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-730938 logs -n 25: (1.490303108s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-421590 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-421590   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-421590                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-421590   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-432357 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-432357   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-432357                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-432357   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-421590                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-421590   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-432357                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-432357   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ start   │ --download-only -p download-docker-726041 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-726041 │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ delete  │ -p download-docker-726041                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-726041 │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ start   │ --download-only -p binary-mirror-053347 --alsologtostderr --binary-mirror http://127.0.0.1:39055 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-053347   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ delete  │ -p binary-mirror-053347                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-053347   │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ addons  │ disable dashboard -p addons-730938                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ addons  │ enable dashboard -p addons-730938                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ start   │ -p addons-730938 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:15 UTC │
	│ addons  │ addons-730938 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ addons  │ addons-730938 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ addons  │ addons-730938 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ addons  │ addons-730938 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ ip      │ addons-730938 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
	│ addons  │ addons-730938 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ ssh     │ addons-730938 ssh cat /opt/local-path-provisioner/pvc-aee992c0-fa66-4d17-ace8-b295e4d945de_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
	│ addons  │ addons-730938 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ addons  │ addons-730938 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-730938 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-730938          │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:12:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:12:45.248682  303796 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:12:45.248850  303796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:12:45.248857  303796 out.go:374] Setting ErrFile to fd 2...
	I1227 09:12:45.248863  303796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:12:45.249571  303796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:12:45.250199  303796 out.go:368] Setting JSON to false
	I1227 09:12:45.251152  303796 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6915,"bootTime":1766819851,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:12:45.251228  303796 start.go:143] virtualization:  
	I1227 09:12:45.266969  303796 out.go:179] * [addons-730938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:12:45.314916  303796 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:12:45.315031  303796 notify.go:221] Checking for updates...
	I1227 09:12:45.381154  303796 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:12:45.412683  303796 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:12:45.445123  303796 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:12:45.476009  303796 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:12:45.518237  303796 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:12:45.552897  303796 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:12:45.574466  303796 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:12:45.574599  303796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:12:45.630584  303796 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-27 09:12:45.620654754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:12:45.630700  303796 docker.go:319] overlay module found
	I1227 09:12:45.679993  303796 out.go:179] * Using the docker driver based on user configuration
	I1227 09:12:45.708385  303796 start.go:309] selected driver: docker
	I1227 09:12:45.708418  303796 start.go:928] validating driver "docker" against <nil>
	I1227 09:12:45.708446  303796 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:12:45.709226  303796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:12:45.763247  303796 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-27 09:12:45.753307479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:12:45.763414  303796 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:12:45.763671  303796 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:12:45.808541  303796 out.go:179] * Using Docker driver with root privileges
	I1227 09:12:45.838611  303796 cni.go:84] Creating CNI manager for ""
	I1227 09:12:45.838684  303796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:12:45.838693  303796 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:12:45.838783  303796 start.go:353] cluster config:
	{Name:addons-730938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-730938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:12:45.855740  303796 out.go:179] * Starting "addons-730938" primary control-plane node in "addons-730938" cluster
	I1227 09:12:45.905003  303796 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:12:45.935794  303796 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:12:45.968017  303796 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:12:45.968029  303796 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:12:45.968091  303796 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:12:45.968103  303796 cache.go:65] Caching tarball of preloaded images
	I1227 09:12:45.968185  303796 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:12:45.968196  303796 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:12:45.968540  303796 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/config.json ...
	I1227 09:12:45.968559  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/config.json: {Name:mkab4dcd756f42849b5ca0e0965ec85acb6732ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:12:45.985793  303796 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:12:45.985940  303796 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:12:45.985963  303796 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 09:12:45.985978  303796 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 09:12:45.985992  303796 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 09:12:45.985997  303796 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from local cache
	I1227 09:13:04.046608  303796 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from cached tarball
	I1227 09:13:04.046668  303796 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:13:04.046707  303796 start.go:360] acquireMachinesLock for addons-730938: {Name:mk8c61115c8b6e395fb3fca0353048c5378c0582 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:13:04.046831  303796 start.go:364] duration metric: took 99.021µs to acquireMachinesLock for "addons-730938"
	I1227 09:13:04.046868  303796 start.go:93] Provisioning new machine with config: &{Name:addons-730938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-730938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:13:04.046939  303796 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:13:04.050415  303796 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1227 09:13:04.050689  303796 start.go:159] libmachine.API.Create for "addons-730938" (driver="docker")
	I1227 09:13:04.050730  303796 client.go:173] LocalClient.Create starting
	I1227 09:13:04.050853  303796 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 09:13:04.816325  303796 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 09:13:05.159428  303796 cli_runner.go:164] Run: docker network inspect addons-730938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:13:05.176821  303796 cli_runner.go:211] docker network inspect addons-730938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:13:05.176912  303796 network_create.go:284] running [docker network inspect addons-730938] to gather additional debugging logs...
	I1227 09:13:05.176934  303796 cli_runner.go:164] Run: docker network inspect addons-730938
	W1227 09:13:05.191883  303796 cli_runner.go:211] docker network inspect addons-730938 returned with exit code 1
	I1227 09:13:05.191917  303796 network_create.go:287] error running [docker network inspect addons-730938]: docker network inspect addons-730938: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-730938 not found
	I1227 09:13:05.191932  303796 network_create.go:289] output of [docker network inspect addons-730938]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-730938 not found
	
	** /stderr **
	I1227 09:13:05.192095  303796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:13:05.208934  303796 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cab250}
	I1227 09:13:05.208986  303796 network_create.go:124] attempt to create docker network addons-730938 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1227 09:13:05.209049  303796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-730938 addons-730938
	I1227 09:13:05.267774  303796 network_create.go:108] docker network addons-730938 192.168.49.0/24 created
	I1227 09:13:05.267807  303796 kic.go:121] calculated static IP "192.168.49.2" for the "addons-730938" container
	I1227 09:13:05.267882  303796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:13:05.283751  303796 cli_runner.go:164] Run: docker volume create addons-730938 --label name.minikube.sigs.k8s.io=addons-730938 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:13:05.302000  303796 oci.go:103] Successfully created a docker volume addons-730938
	I1227 09:13:05.302110  303796 cli_runner.go:164] Run: docker run --rm --name addons-730938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-730938 --entrypoint /usr/bin/test -v addons-730938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:13:06.360567  303796 cli_runner.go:217] Completed: docker run --rm --name addons-730938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-730938 --entrypoint /usr/bin/test -v addons-730938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.058414283s)
	I1227 09:13:06.360599  303796 oci.go:107] Successfully prepared a docker volume addons-730938
	I1227 09:13:06.360644  303796 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:13:06.360658  303796 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:13:06.360724  303796 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-730938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:13:10.204592  303796 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-730938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.843830261s)
	I1227 09:13:10.204627  303796 kic.go:203] duration metric: took 3.843965795s to extract preloaded images to volume ...
	W1227 09:13:10.204764  303796 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:13:10.204873  303796 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:13:10.261271  303796 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-730938 --name addons-730938 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-730938 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-730938 --network addons-730938 --ip 192.168.49.2 --volume addons-730938:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:13:10.582961  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Running}}
	I1227 09:13:10.602297  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:10.626336  303796 cli_runner.go:164] Run: docker exec addons-730938 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:13:10.686339  303796 oci.go:144] the created container "addons-730938" has a running status.
	I1227 09:13:10.686367  303796 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa...
	I1227 09:13:11.279623  303796 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:13:11.300246  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:11.318205  303796 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:13:11.318230  303796 kic_runner.go:114] Args: [docker exec --privileged addons-730938 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:13:11.360940  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:11.377939  303796 machine.go:94] provisionDockerMachine start ...
	I1227 09:13:11.378047  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:11.396029  303796 main.go:144] libmachine: Using SSH client type: native
	I1227 09:13:11.396428  303796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1227 09:13:11.396445  303796 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:13:11.397105  303796 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:13:14.537605  303796 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-730938
	
	I1227 09:13:14.537630  303796 ubuntu.go:182] provisioning hostname "addons-730938"
	I1227 09:13:14.537701  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:14.555399  303796 main.go:144] libmachine: Using SSH client type: native
	I1227 09:13:14.555712  303796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1227 09:13:14.555728  303796 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-730938 && echo "addons-730938" | sudo tee /etc/hostname
	I1227 09:13:14.703391  303796 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-730938
	
	I1227 09:13:14.703472  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:14.721816  303796 main.go:144] libmachine: Using SSH client type: native
	I1227 09:13:14.722144  303796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1227 09:13:14.722191  303796 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-730938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-730938/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-730938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:13:14.858532  303796 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:13:14.858555  303796 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 09:13:14.858580  303796 ubuntu.go:190] setting up certificates
	I1227 09:13:14.858591  303796 provision.go:84] configureAuth start
	I1227 09:13:14.858663  303796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-730938
	I1227 09:13:14.875939  303796 provision.go:143] copyHostCerts
	I1227 09:13:14.876017  303796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 09:13:14.876148  303796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 09:13:14.876217  303796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 09:13:14.876273  303796 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.addons-730938 san=[127.0.0.1 192.168.49.2 addons-730938 localhost minikube]
	I1227 09:13:14.957412  303796 provision.go:177] copyRemoteCerts
	I1227 09:13:14.957506  303796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:13:14.957580  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:14.979615  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:15.090344  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:13:15.109634  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:13:15.128834  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:13:15.147107  303796 provision.go:87] duration metric: took 288.502102ms to configureAuth
	I1227 09:13:15.147135  303796 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:13:15.147331  303796 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:13:15.147448  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:15.165237  303796 main.go:144] libmachine: Using SSH client type: native
	I1227 09:13:15.165549  303796 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1227 09:13:15.165563  303796 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:13:15.437058  303796 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:13:15.437079  303796 machine.go:97] duration metric: took 4.059116417s to provisionDockerMachine
	I1227 09:13:15.437089  303796 client.go:176] duration metric: took 11.386350516s to LocalClient.Create
	I1227 09:13:15.437103  303796 start.go:167] duration metric: took 11.386417652s to libmachine.API.Create "addons-730938"
	I1227 09:13:15.437110  303796 start.go:293] postStartSetup for "addons-730938" (driver="docker")
	I1227 09:13:15.437120  303796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:13:15.437202  303796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:13:15.437247  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:15.455261  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:15.554422  303796 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:13:15.557743  303796 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:13:15.557773  303796 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:13:15.557786  303796 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:13:15.557872  303796 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:13:15.557898  303796 start.go:296] duration metric: took 120.782363ms for postStartSetup
	I1227 09:13:15.558242  303796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-730938
	I1227 09:13:15.574905  303796 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/config.json ...
	I1227 09:13:15.575205  303796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:13:15.575255  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:15.592821  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:15.687296  303796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:13:15.692061  303796 start.go:128] duration metric: took 11.645106352s to createHost
	I1227 09:13:15.692086  303796 start.go:83] releasing machines lock for "addons-730938", held for 11.645240754s
	I1227 09:13:15.692172  303796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-730938
	I1227 09:13:15.708620  303796 ssh_runner.go:195] Run: cat /version.json
	I1227 09:13:15.708662  303796 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:13:15.708671  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:15.708729  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:15.726989  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:15.739850  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:15.910545  303796 ssh_runner.go:195] Run: systemctl --version
	I1227 09:13:15.917278  303796 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:13:15.953433  303796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:13:15.957923  303796 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:13:15.957992  303796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:13:15.988393  303796 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:13:15.988421  303796 start.go:496] detecting cgroup driver to use...
	I1227 09:13:15.988468  303796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:13:15.988539  303796 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:13:16.008712  303796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:13:16.023019  303796 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:13:16.023116  303796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:13:16.042072  303796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:13:16.060639  303796 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:13:16.182932  303796 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:13:16.307504  303796 docker.go:234] disabling docker service ...
	I1227 09:13:16.307574  303796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:13:16.328869  303796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:13:16.341659  303796 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:13:16.453720  303796 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:13:16.576559  303796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:13:16.590946  303796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:13:16.604539  303796 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:13:16.604643  303796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.613338  303796 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:13:16.613451  303796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.622649  303796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.631290  303796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.639924  303796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:13:16.647890  303796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.656494  303796 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.669668  303796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:13:16.678286  303796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:13:16.685773  303796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:13:16.693086  303796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:13:16.799446  303796 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:13:16.968866  303796 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:13:16.968972  303796 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:13:16.972756  303796 start.go:574] Will wait 60s for crictl version
	I1227 09:13:16.972823  303796 ssh_runner.go:195] Run: which crictl
	I1227 09:13:16.976543  303796 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:13:17.000928  303796 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:13:17.001055  303796 ssh_runner.go:195] Run: crio --version
	I1227 09:13:17.031743  303796 ssh_runner.go:195] Run: crio --version
	I1227 09:13:17.063710  303796 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:13:17.066623  303796 cli_runner.go:164] Run: docker network inspect addons-730938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:13:17.081448  303796 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:13:17.085319  303796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:13:17.095580  303796 kubeadm.go:884] updating cluster {Name:addons-730938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-730938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:13:17.095701  303796 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:13:17.095769  303796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:13:17.132122  303796 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:13:17.132150  303796 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:13:17.132207  303796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:13:17.157371  303796 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:13:17.157396  303796 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:13:17.157405  303796 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:13:17.157502  303796 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-730938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-730938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:13:17.157592  303796 ssh_runner.go:195] Run: crio config
	I1227 09:13:17.214675  303796 cni.go:84] Creating CNI manager for ""
	I1227 09:13:17.214703  303796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:13:17.214727  303796 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:13:17.214757  303796 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-730938 NodeName:addons-730938 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:13:17.214921  303796 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-730938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:13:17.214998  303796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:13:17.223981  303796 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:13:17.224063  303796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:13:17.232201  303796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:13:17.245265  303796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:13:17.258885  303796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1227 09:13:17.271848  303796 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:13:17.275561  303796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:13:17.285506  303796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:13:17.391603  303796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:13:17.407493  303796 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938 for IP: 192.168.49.2
	I1227 09:13:17.407563  303796 certs.go:195] generating shared ca certs ...
	I1227 09:13:17.407593  303796 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:17.407766  303796 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:13:17.967803  303796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt ...
	I1227 09:13:17.967836  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt: {Name:mk5cfc19c433c9746a9eec65d3fea66369fead17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:17.968076  303796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key ...
	I1227 09:13:17.968092  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key: {Name:mk2d9695e2d4c6904e84923d2e1feb10fbdc2fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:17.968185  303796 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:13:18.290654  303796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt ...
	I1227 09:13:18.290686  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt: {Name:mk0038cdd16ea85993e7798056245361608a7b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:18.291476  303796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key ...
	I1227 09:13:18.291493  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key: {Name:mk10c3c4e9db670e470d79ea6654e4e7011a1139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:18.292234  303796 certs.go:257] generating profile certs ...
	I1227 09:13:18.292299  303796 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.key
	I1227 09:13:18.292317  303796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt with IP's: []
	I1227 09:13:18.539176  303796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt ...
	I1227 09:13:18.539216  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: {Name:mkf83760afaa201c00b6b508b956db3ab87e3824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:18.539405  303796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.key ...
	I1227 09:13:18.539420  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.key: {Name:mkfd0801aa9c868e48405134ee1e8f27c1c7069e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:18.539496  303796 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.key.8fca2daa
	I1227 09:13:18.539521  303796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.crt.8fca2daa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1227 09:13:18.782528  303796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.crt.8fca2daa ...
	I1227 09:13:18.782561  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.crt.8fca2daa: {Name:mk3fb3618ff1ec4a44f6c6e4882c27dca2716ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:18.782748  303796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.key.8fca2daa ...
	I1227 09:13:18.782763  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.key.8fca2daa: {Name:mkaaee6fd54d45cd478df17814a34317afa6e6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:18.782853  303796 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.crt.8fca2daa -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.crt
	I1227 09:13:18.782935  303796 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.key.8fca2daa -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.key
	I1227 09:13:18.782991  303796 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.key
	I1227 09:13:18.783011  303796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.crt with IP's: []
	I1227 09:13:19.174649  303796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.crt ...
	I1227 09:13:19.174680  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.crt: {Name:mk004c851fc70bd84856b35db67b200fd3a174be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:19.175451  303796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.key ...
	I1227 09:13:19.175468  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.key: {Name:mk2984bc8ef6718ff59578bf2cbd0ec20adea76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:19.175666  303796 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:13:19.175711  303796 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:13:19.175749  303796 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:13:19.175778  303796 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:13:19.176341  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:13:19.195710  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:13:19.213910  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:13:19.232265  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:13:19.250497  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:13:19.268459  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:13:19.285508  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:13:19.303580  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:13:19.321385  303796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:13:19.339872  303796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:13:19.353287  303796 ssh_runner.go:195] Run: openssl version
	I1227 09:13:19.359439  303796 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:13:19.366704  303796 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:13:19.374132  303796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:13:19.377637  303796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:13:19.377704  303796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:13:19.418438  303796 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:13:19.425938  303796 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:13:19.433402  303796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:13:19.437170  303796 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:13:19.437219  303796 kubeadm.go:401] StartCluster: {Name:addons-730938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-730938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:13:19.437308  303796 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:13:19.437373  303796 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:13:19.463787  303796 cri.go:96] found id: ""
	I1227 09:13:19.463890  303796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:13:19.471783  303796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:13:19.479753  303796 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:13:19.479852  303796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:13:19.487965  303796 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:13:19.487986  303796 kubeadm.go:158] found existing configuration files:
	
	I1227 09:13:19.488038  303796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:13:19.496136  303796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:13:19.496229  303796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:13:19.503690  303796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:13:19.511622  303796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:13:19.511689  303796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:13:19.519412  303796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:13:19.526994  303796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:13:19.527085  303796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:13:19.534337  303796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:13:19.541966  303796 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:13:19.542029  303796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:13:19.549426  303796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:13:19.681742  303796 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:13:19.682215  303796 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:13:19.746773  303796 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:13:30.558206  303796 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:13:30.558270  303796 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:13:30.558382  303796 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:13:30.558457  303796 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:13:30.558498  303796 kubeadm.go:319] OS: Linux
	I1227 09:13:30.558556  303796 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:13:30.558605  303796 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:13:30.558665  303796 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:13:30.558715  303796 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:13:30.558762  303796 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:13:30.558811  303796 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:13:30.558857  303796 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:13:30.558918  303796 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:13:30.558972  303796 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:13:30.559049  303796 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:13:30.559152  303796 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:13:30.559265  303796 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:13:30.559338  303796 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:13:30.564344  303796 out.go:252]   - Generating certificates and keys ...
	I1227 09:13:30.564442  303796 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:13:30.564510  303796 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:13:30.564578  303796 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:13:30.564632  303796 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:13:30.564689  303796 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:13:30.564736  303796 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:13:30.564787  303796 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:13:30.564897  303796 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-730938 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 09:13:30.564948  303796 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:13:30.565056  303796 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-730938 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 09:13:30.565117  303796 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:13:30.565177  303796 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:13:30.565230  303796 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:13:30.565283  303796 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:13:30.565331  303796 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:13:30.565383  303796 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:13:30.565434  303796 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:13:30.565493  303796 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:13:30.565545  303796 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:13:30.565621  303796 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:13:30.565684  303796 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:13:30.568567  303796 out.go:252]   - Booting up control plane ...
	I1227 09:13:30.568687  303796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:13:30.568773  303796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:13:30.568851  303796 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:13:30.568982  303796 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:13:30.569087  303796 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:13:30.569242  303796 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:13:30.569368  303796 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:13:30.569430  303796 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:13:30.569587  303796 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:13:30.569792  303796 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:13:30.569873  303796 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001652596s
	I1227 09:13:30.569972  303796 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:13:30.570086  303796 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1227 09:13:30.570337  303796 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:13:30.570434  303796 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:13:30.570527  303796 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.012138095s
	I1227 09:13:30.570618  303796 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.26596225s
	I1227 09:13:30.570685  303796 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001345325s
	I1227 09:13:30.570800  303796 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:13:30.570938  303796 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:13:30.571001  303796 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:13:30.571187  303796 kubeadm.go:319] [mark-control-plane] Marking the node addons-730938 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:13:30.571244  303796 kubeadm.go:319] [bootstrap-token] Using token: c1918j.n9ygvbrhghxfp2w8
	I1227 09:13:30.574289  303796 out.go:252]   - Configuring RBAC rules ...
	I1227 09:13:30.574412  303796 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:13:30.574497  303796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:13:30.574634  303796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:13:30.574757  303796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:13:30.574869  303796 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:13:30.574956  303796 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:13:30.575067  303796 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:13:30.575111  303796 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:13:30.575156  303796 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:13:30.575163  303796 kubeadm.go:319] 
	I1227 09:13:30.575220  303796 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:13:30.575229  303796 kubeadm.go:319] 
	I1227 09:13:30.575305  303796 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:13:30.575312  303796 kubeadm.go:319] 
	I1227 09:13:30.575337  303796 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:13:30.575395  303796 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:13:30.575446  303796 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:13:30.575453  303796 kubeadm.go:319] 
	I1227 09:13:30.575503  303796 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:13:30.575510  303796 kubeadm.go:319] 
	I1227 09:13:30.575555  303796 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:13:30.575562  303796 kubeadm.go:319] 
	I1227 09:13:30.575612  303796 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:13:30.575685  303796 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:13:30.575752  303796 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:13:30.575761  303796 kubeadm.go:319] 
	I1227 09:13:30.575840  303796 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:13:30.575932  303796 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:13:30.575939  303796 kubeadm.go:319] 
	I1227 09:13:30.576023  303796 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c1918j.n9ygvbrhghxfp2w8 \
	I1227 09:13:30.576124  303796 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c \
	I1227 09:13:30.576148  303796 kubeadm.go:319] 	--control-plane 
	I1227 09:13:30.576152  303796 kubeadm.go:319] 
	I1227 09:13:30.576232  303796 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:13:30.576239  303796 kubeadm.go:319] 
	I1227 09:13:30.576316  303796 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c1918j.n9ygvbrhghxfp2w8 \
	I1227 09:13:30.576430  303796 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c 
	I1227 09:13:30.576443  303796 cni.go:84] Creating CNI manager for ""
	I1227 09:13:30.576451  303796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:13:30.579658  303796 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 09:13:30.583064  303796 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 09:13:30.587996  303796 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 09:13:30.588030  303796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 09:13:30.602513  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 09:13:30.912215  303796 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 09:13:30.912321  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:30.912370  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-730938 minikube.k8s.io/updated_at=2025_12_27T09_13_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=addons-730938 minikube.k8s.io/primary=true
	I1227 09:13:31.072572  303796 ops.go:34] apiserver oom_adj: -16
	I1227 09:13:31.072676  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:31.573539  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:32.073607  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:32.572958  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:33.072833  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:33.572869  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:34.072800  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:34.573204  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:35.073364  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:35.573184  303796 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:13:35.685888  303796 kubeadm.go:1114] duration metric: took 4.773669289s to wait for elevateKubeSystemPrivileges
	I1227 09:13:35.685921  303796 kubeadm.go:403] duration metric: took 16.248704871s to StartCluster
	I1227 09:13:35.685953  303796 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:35.686707  303796 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:13:35.687068  303796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:13:35.687844  303796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 09:13:35.687875  303796 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:13:35.688102  303796 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:13:35.688138  303796 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1227 09:13:35.688211  303796 addons.go:70] Setting yakd=true in profile "addons-730938"
	I1227 09:13:35.688230  303796 addons.go:239] Setting addon yakd=true in "addons-730938"
	I1227 09:13:35.688251  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.688707  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.689142  303796 addons.go:70] Setting metrics-server=true in profile "addons-730938"
	I1227 09:13:35.689169  303796 addons.go:239] Setting addon metrics-server=true in "addons-730938"
	I1227 09:13:35.689192  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.689628  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.691810  303796 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-730938"
	I1227 09:13:35.692714  303796 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-730938"
	I1227 09:13:35.692779  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.693370  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.695326  303796 addons.go:70] Setting cloud-spanner=true in profile "addons-730938"
	I1227 09:13:35.695419  303796 addons.go:239] Setting addon cloud-spanner=true in "addons-730938"
	I1227 09:13:35.695481  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.696020  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.692619  303796 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-730938"
	I1227 09:13:35.699113  303796 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-730938"
	I1227 09:13:35.699156  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.699628  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.707231  303796 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-730938"
	I1227 09:13:35.707307  303796 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-730938"
	I1227 09:13:35.707339  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.707801  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.692633  303796 addons.go:70] Setting registry=true in profile "addons-730938"
	I1227 09:13:35.709213  303796 addons.go:239] Setting addon registry=true in "addons-730938"
	I1227 09:13:35.709248  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.709856  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.710119  303796 addons.go:70] Setting default-storageclass=true in profile "addons-730938"
	I1227 09:13:35.736857  303796 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-730938"
	I1227 09:13:35.737279  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.710317  303796 addons.go:70] Setting gcp-auth=true in profile "addons-730938"
	I1227 09:13:35.747631  303796 mustload.go:66] Loading cluster: addons-730938
	I1227 09:13:35.747978  303796 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:13:35.748319  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.692661  303796 addons.go:70] Setting storage-provisioner=true in profile "addons-730938"
	I1227 09:13:35.790345  303796 addons.go:239] Setting addon storage-provisioner=true in "addons-730938"
	I1227 09:13:35.790424  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.791101  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.692668  303796 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-730938"
	I1227 09:13:35.798907  303796 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-730938"
	I1227 09:13:35.799276  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.814649  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1227 09:13:35.814966  303796 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1227 09:13:35.815283  303796 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.6
	I1227 09:13:35.692675  303796 addons.go:70] Setting volcano=true in profile "addons-730938"
	I1227 09:13:35.692680  303796 addons.go:70] Setting volumesnapshots=true in profile "addons-730938"
	I1227 09:13:35.710361  303796 addons.go:70] Setting ingress=true in profile "addons-730938"
	I1227 09:13:35.710377  303796 addons.go:70] Setting ingress-dns=true in profile "addons-730938"
	I1227 09:13:35.710386  303796 addons.go:70] Setting inspektor-gadget=true in profile "addons-730938"
	I1227 09:13:35.710424  303796 out.go:179] * Verifying Kubernetes components...
	I1227 09:13:35.692645  303796 addons.go:70] Setting registry-creds=true in profile "addons-730938"
	I1227 09:13:35.819049  303796 addons.go:239] Setting addon registry-creds=true in "addons-730938"
	I1227 09:13:35.819104  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.819584  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.841090  303796 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 09:13:35.841169  303796 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 09:13:35.841283  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:35.848648  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1227 09:13:35.853978  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1227 09:13:35.857228  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1227 09:13:35.860294  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1227 09:13:35.861690  303796 addons.go:239] Setting addon volcano=true in "addons-730938"
	I1227 09:13:35.861743  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.862355  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.870143  303796 addons.go:239] Setting addon volumesnapshots=true in "addons-730938"
	I1227 09:13:35.870262  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.870768  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.872657  303796 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1227 09:13:35.872683  303796 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1227 09:13:35.872754  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:35.902494  303796 addons.go:239] Setting addon ingress=true in "addons-730938"
	I1227 09:13:35.902551  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.903119  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.909361  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1227 09:13:35.912468  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1227 09:13:35.914456  303796 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1227 09:13:35.918884  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1227 09:13:35.921003  303796 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 09:13:35.921026  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1227 09:13:35.921090  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:35.929831  303796 addons.go:239] Setting addon ingress-dns=true in "addons-730938"
	I1227 09:13:35.929901  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.930649  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.932186  303796 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1227 09:13:35.934473  303796 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1227 09:13:35.946505  303796 addons.go:239] Setting addon inspektor-gadget=true in "addons-730938"
	I1227 09:13:35.946555  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.947012  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.933256  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.973084  303796 addons.go:239] Setting addon default-storageclass=true in "addons-730938"
	I1227 09:13:35.973130  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:35.973532  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.933325  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1227 09:13:35.986990  303796 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1227 09:13:35.987111  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:35.949466  303796 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 09:13:36.001543  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1227 09:13:36.001677  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.009338  303796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:13:36.009539  303796 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1227 09:13:36.012770  303796 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-730938"
	I1227 09:13:36.012817  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:36.014198  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:35.949488  303796 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1227 09:13:36.038639  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1227 09:13:36.038745  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:35.969691  303796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 09:13:36.063988  303796 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:13:36.067027  303796 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:13:36.067050  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:13:36.067110  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.075135  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.077795  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.079075  303796 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1227 09:13:36.080161  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	W1227 09:13:36.081182  303796 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1227 09:13:36.082260  303796 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 09:13:36.082278  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1227 09:13:36.082342  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.100892  303796 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1227 09:13:36.103833  303796 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 09:13:36.103861  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1227 09:13:36.103932  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.108566  303796 out.go:179]   - Using image docker.io/registry:3.0.0
	I1227 09:13:36.112560  303796 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1227 09:13:36.112587  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1227 09:13:36.112657  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.147840  303796 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1227 09:13:36.152402  303796 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1227 09:13:36.152428  303796 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1227 09:13:36.152494  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.184418  303796 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1227 09:13:36.205943  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.211098  303796 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:13:36.222368  303796 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:13:36.225885  303796 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 09:13:36.225913  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1227 09:13:36.225990  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.228496  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.238374  303796 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:13:36.238395  303796 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:13:36.240236  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.263590  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.267786  303796 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1227 09:13:36.272910  303796 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1227 09:13:36.272940  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1227 09:13:36.273006  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.310002  303796 out.go:179]   - Using image docker.io/busybox:stable
	I1227 09:13:36.316288  303796 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1227 09:13:36.319248  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.320109  303796 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 09:13:36.320127  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1227 09:13:36.320189  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:36.325454  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.350306  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.350419  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.374713  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.410433  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.411165  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	W1227 09:13:36.419391  303796 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1227 09:13:36.419441  303796 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	W1227 09:13:36.420211  303796 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1227 09:13:36.423223  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.425980  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:36.970675  303796 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 09:13:36.970749  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1227 09:13:37.011207  303796 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1227 09:13:37.011286  303796 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1227 09:13:37.186756  303796 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1227 09:13:37.186787  303796 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1227 09:13:37.272185  303796 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 09:13:37.272207  303796 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 09:13:37.296288  303796 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1227 09:13:37.296318  303796 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1227 09:13:37.299661  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1227 09:13:37.299686  303796 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1227 09:13:37.353742  303796 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1227 09:13:37.353818  303796 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1227 09:13:37.393016  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 09:13:37.432693  303796 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1227 09:13:37.432770  303796 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1227 09:13:37.483302  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 09:13:37.495804  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 09:13:37.500732  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:13:37.530378  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1227 09:13:37.532949  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 09:13:37.545605  303796 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:13:37.545677  303796 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 09:13:37.548971  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1227 09:13:37.549048  303796 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1227 09:13:37.580146  303796 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1227 09:13:37.580221  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1227 09:13:37.606731  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:13:37.631294  303796 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1227 09:13:37.631370  303796 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1227 09:13:37.659931  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1227 09:13:37.722079  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 09:13:37.733298  303796 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1227 09:13:37.733373  303796 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1227 09:13:37.757114  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1227 09:13:37.757201  303796 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1227 09:13:37.783641  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:13:37.814099  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1227 09:13:37.820796  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 09:13:37.951756  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1227 09:13:37.951780  303796 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1227 09:13:38.082198  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1227 09:13:38.082271  303796 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1227 09:13:38.091233  303796 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1227 09:13:38.091305  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1227 09:13:38.335188  303796 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:13:38.335260  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1227 09:13:38.393760  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1227 09:13:38.422371  303796 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1227 09:13:38.422446  303796 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1227 09:13:38.454545  303796 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (2.445133165s)
	I1227 09:13:38.454718  303796 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.391397015s)
	I1227 09:13:38.454780  303796 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1227 09:13:38.454749  303796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:13:38.658972  303796 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1227 09:13:38.658995  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1227 09:13:38.785177  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:13:38.909160  303796 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1227 09:13:38.909241  303796 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1227 09:13:38.965802  303796 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-730938" context rescaled to 1 replicas
	I1227 09:13:39.034025  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.640919375s)
	I1227 09:13:39.246790  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.763401367s)
	I1227 09:13:39.285583  303796 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1227 09:13:39.285607  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1227 09:13:39.488115  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.99223789s)
	I1227 09:13:39.498690  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.99786856s)
	I1227 09:13:39.511615  303796 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1227 09:13:39.511635  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1227 09:13:39.658643  303796 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 09:13:39.658672  303796 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1227 09:13:39.796322  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 09:13:40.274907  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.744453337s)
	I1227 09:13:41.452999  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.919975772s)
	I1227 09:13:42.032886  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.426068863s)
	I1227 09:13:42.310239  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.650224977s)
	I1227 09:13:42.945818  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.16208353s)
	I1227 09:13:42.945848  303796 addons.go:495] Verifying addon metrics-server=true in "addons-730938"
	I1227 09:13:42.945885  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.131749469s)
	I1227 09:13:42.945947  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.125130813s)
	I1227 09:13:42.945970  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.223822324s)
	I1227 09:13:42.945983  303796 addons.go:495] Verifying addon ingress=true in "addons-730938"
	I1227 09:13:42.946277  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.552437574s)
	I1227 09:13:42.946616  303796 addons.go:495] Verifying addon registry=true in "addons-730938"
	I1227 09:13:42.946416  303796 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.49154873s)
	I1227 09:13:42.947672  303796 node_ready.go:35] waiting up to 6m0s for node "addons-730938" to be "Ready" ...
	I1227 09:13:42.946493  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.161241596s)
	W1227 09:13:42.947877  303796 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1227 09:13:42.947898  303796 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1227 09:13:42.949646  303796 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-730938 service yakd-dashboard -n yakd-dashboard
	
	I1227 09:13:42.949656  303796 out.go:179] * Verifying ingress addon...
	I1227 09:13:42.949763  303796 out.go:179] * Verifying registry addon...
	I1227 09:13:42.954710  303796 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1227 09:13:42.955669  303796 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1227 09:13:42.963287  303796 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 09:13:42.963357  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:42.965839  303796 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1227 09:13:42.965943  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:43.125439  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:13:43.233450  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.437064201s)
	I1227 09:13:43.233541  303796 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-730938"
	I1227 09:13:43.236668  303796 out.go:179] * Verifying csi-hostpath-driver addon...
	I1227 09:13:43.240398  303796 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1227 09:13:43.258630  303796 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 09:13:43.258704  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:43.469205  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:43.469296  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:43.580943  303796 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1227 09:13:43.581029  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:43.599226  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:43.702991  303796 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1227 09:13:43.716727  303796 addons.go:239] Setting addon gcp-auth=true in "addons-730938"
	I1227 09:13:43.716780  303796 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:13:43.717230  303796 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:13:43.737931  303796 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1227 09:13:43.737988  303796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:13:43.744757  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:43.756017  303796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:13:43.959526  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:43.959742  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:44.244141  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:44.458327  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:44.458981  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:44.744378  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1227 09:13:44.950912  303796 node_ready.go:57] node "addons-730938" has "Ready":"False" status (will retry)
	I1227 09:13:44.957765  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:44.958480  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:45.244243  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:45.460063  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:45.460616  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:45.744239  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:45.858298  303796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.732763321s)
	I1227 09:13:45.858384  303796 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.120433525s)
	I1227 09:13:45.861585  303796 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:13:45.864462  303796 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1227 09:13:45.867305  303796 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1227 09:13:45.867336  303796 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1227 09:13:45.881168  303796 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1227 09:13:45.881190  303796 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1227 09:13:45.895582  303796 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 09:13:45.895606  303796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1227 09:13:45.908748  303796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 09:13:45.960540  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:45.961575  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:46.244801  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:46.398057  303796 addons.go:495] Verifying addon gcp-auth=true in "addons-730938"
	I1227 09:13:46.400862  303796 out.go:179] * Verifying gcp-auth addon...
	I1227 09:13:46.404496  303796 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1227 09:13:46.412344  303796 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1227 09:13:46.412369  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:46.512049  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:46.512412  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:46.743746  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:46.907762  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 09:13:46.951670  303796 node_ready.go:57] node "addons-730938" has "Ready":"False" status (will retry)
	I1227 09:13:46.958746  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:46.958936  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:47.244033  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:47.408407  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:47.458897  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:47.459438  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:47.743443  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:47.908230  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:47.959624  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:47.960003  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:48.245326  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:48.407279  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:48.459099  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:48.459262  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:48.744246  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:48.909347  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:48.958751  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:48.958910  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:49.244799  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:49.407984  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 09:13:49.454321  303796 node_ready.go:57] node "addons-730938" has "Ready":"False" status (will retry)
	I1227 09:13:49.464673  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:49.465050  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:49.819219  303796 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 09:13:49.819339  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:49.944480  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:49.956897  303796 node_ready.go:49] node "addons-730938" is "Ready"
	I1227 09:13:49.956973  303796 node_ready.go:38] duration metric: took 7.009280573s for node "addons-730938" to be "Ready" ...
	I1227 09:13:49.957001  303796 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:13:49.957081  303796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:13:49.980776  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:49.981221  303796 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 09:13:49.981241  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:49.985150  303796 api_server.go:72] duration metric: took 14.297242876s to wait for apiserver process to appear ...
	I1227 09:13:49.985177  303796 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:13:49.985218  303796 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 09:13:49.996960  303796 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 09:13:50.003458  303796 api_server.go:141] control plane version: v1.35.0
	I1227 09:13:50.003499  303796 api_server.go:131] duration metric: took 18.313784ms to wait for apiserver health ...
	I1227 09:13:50.003510  303796 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:13:50.037098  303796 system_pods.go:59] 19 kube-system pods found
	I1227 09:13:50.037142  303796 system_pods.go:61] "coredns-7d764666f9-xtj9t" [24c14bda-773d-415f-b762-1450176d6d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:13:50.037153  303796 system_pods.go:61] "csi-hostpath-attacher-0" [5126a6a7-1d42-48d4-acbe-8a5685077733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:13:50.037159  303796 system_pods.go:61] "csi-hostpath-resizer-0" [06db8ffa-b07e-4aeb-bd46-a9ecb9a67589] Pending
	I1227 09:13:50.037164  303796 system_pods.go:61] "csi-hostpathplugin-25pt8" [7cdc8d25-60b9-4976-8487-5d0c851cee4c] Pending
	I1227 09:13:50.037169  303796 system_pods.go:61] "etcd-addons-730938" [7285901f-d692-42c6-8ebd-639a35437696] Running
	I1227 09:13:50.037175  303796 system_pods.go:61] "kindnet-lh6m8" [4a445964-f164-40f8-a75a-ab72be949024] Running
	I1227 09:13:50.037180  303796 system_pods.go:61] "kube-apiserver-addons-730938" [c7900903-40ba-4ae0-a78a-84f9f38e2983] Running
	I1227 09:13:50.037186  303796 system_pods.go:61] "kube-controller-manager-addons-730938" [ac4ae5fb-0e42-4ea3-be89-85ec8d00d726] Running
	I1227 09:13:50.037191  303796 system_pods.go:61] "kube-ingress-dns-minikube" [fa60965c-1728-4a80-aade-c9cf2cf3c2d4] Pending
	I1227 09:13:50.037198  303796 system_pods.go:61] "kube-proxy-7bh9h" [ae689d09-cb21-4a2d-8289-0510bcd830eb] Running
	I1227 09:13:50.037208  303796 system_pods.go:61] "kube-scheduler-addons-730938" [9d05a602-52bf-4ba9-9a02-86a3bc16d767] Running
	I1227 09:13:50.037215  303796 system_pods.go:61] "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:13:50.037220  303796 system_pods.go:61] "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Pending
	I1227 09:13:50.037232  303796 system_pods.go:61] "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:13:50.037238  303796 system_pods.go:61] "registry-creds-567fb78d95-gv8vz" [5eef20df-0f92-4c24-a247-6e0cce141060] Pending
	I1227 09:13:50.037250  303796 system_pods.go:61] "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Pending
	I1227 09:13:50.037254  303796 system_pods.go:61] "snapshot-controller-6588d87457-7gcsz" [31eae95c-849e-4afb-a907-35500153dad2] Pending
	I1227 09:13:50.037259  303796 system_pods.go:61] "snapshot-controller-6588d87457-qk4q8" [bedb426f-4b56-4bfb-82c9-a589362a6a58] Pending
	I1227 09:13:50.037263  303796 system_pods.go:61] "storage-provisioner" [ffb632a1-439a-4db2-8c01-b53eef01b2f0] Pending
	I1227 09:13:50.037269  303796 system_pods.go:74] duration metric: took 33.752103ms to wait for pod list to return data ...
	I1227 09:13:50.037282  303796 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:13:50.056077  303796 default_sa.go:45] found service account: "default"
	I1227 09:13:50.056116  303796 default_sa.go:55] duration metric: took 18.827448ms for default service account to be created ...
	I1227 09:13:50.056128  303796 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:13:50.088882  303796 system_pods.go:86] 19 kube-system pods found
	I1227 09:13:50.088931  303796 system_pods.go:89] "coredns-7d764666f9-xtj9t" [24c14bda-773d-415f-b762-1450176d6d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:13:50.088944  303796 system_pods.go:89] "csi-hostpath-attacher-0" [5126a6a7-1d42-48d4-acbe-8a5685077733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:13:50.088951  303796 system_pods.go:89] "csi-hostpath-resizer-0" [06db8ffa-b07e-4aeb-bd46-a9ecb9a67589] Pending
	I1227 09:13:50.088956  303796 system_pods.go:89] "csi-hostpathplugin-25pt8" [7cdc8d25-60b9-4976-8487-5d0c851cee4c] Pending
	I1227 09:13:50.088960  303796 system_pods.go:89] "etcd-addons-730938" [7285901f-d692-42c6-8ebd-639a35437696] Running
	I1227 09:13:50.088967  303796 system_pods.go:89] "kindnet-lh6m8" [4a445964-f164-40f8-a75a-ab72be949024] Running
	I1227 09:13:50.088976  303796 system_pods.go:89] "kube-apiserver-addons-730938" [c7900903-40ba-4ae0-a78a-84f9f38e2983] Running
	I1227 09:13:50.088982  303796 system_pods.go:89] "kube-controller-manager-addons-730938" [ac4ae5fb-0e42-4ea3-be89-85ec8d00d726] Running
	I1227 09:13:50.089050  303796 system_pods.go:89] "kube-ingress-dns-minikube" [fa60965c-1728-4a80-aade-c9cf2cf3c2d4] Pending
	I1227 09:13:50.089060  303796 system_pods.go:89] "kube-proxy-7bh9h" [ae689d09-cb21-4a2d-8289-0510bcd830eb] Running
	I1227 09:13:50.089066  303796 system_pods.go:89] "kube-scheduler-addons-730938" [9d05a602-52bf-4ba9-9a02-86a3bc16d767] Running
	I1227 09:13:50.089073  303796 system_pods.go:89] "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:13:50.089087  303796 system_pods.go:89] "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Pending
	I1227 09:13:50.089096  303796 system_pods.go:89] "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:13:50.089107  303796 system_pods.go:89] "registry-creds-567fb78d95-gv8vz" [5eef20df-0f92-4c24-a247-6e0cce141060] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:13:50.089121  303796 system_pods.go:89] "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Pending
	I1227 09:13:50.089133  303796 system_pods.go:89] "snapshot-controller-6588d87457-7gcsz" [31eae95c-849e-4afb-a907-35500153dad2] Pending
	I1227 09:13:50.089139  303796 system_pods.go:89] "snapshot-controller-6588d87457-qk4q8" [bedb426f-4b56-4bfb-82c9-a589362a6a58] Pending
	I1227 09:13:50.089143  303796 system_pods.go:89] "storage-provisioner" [ffb632a1-439a-4db2-8c01-b53eef01b2f0] Pending
	I1227 09:13:50.089164  303796 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 09:13:50.247557  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:50.376533  303796 system_pods.go:86] 19 kube-system pods found
	I1227 09:13:50.376580  303796 system_pods.go:89] "coredns-7d764666f9-xtj9t" [24c14bda-773d-415f-b762-1450176d6d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:13:50.376591  303796 system_pods.go:89] "csi-hostpath-attacher-0" [5126a6a7-1d42-48d4-acbe-8a5685077733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:13:50.376599  303796 system_pods.go:89] "csi-hostpath-resizer-0" [06db8ffa-b07e-4aeb-bd46-a9ecb9a67589] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:13:50.376616  303796 system_pods.go:89] "csi-hostpathplugin-25pt8" [7cdc8d25-60b9-4976-8487-5d0c851cee4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:13:50.376622  303796 system_pods.go:89] "etcd-addons-730938" [7285901f-d692-42c6-8ebd-639a35437696] Running
	I1227 09:13:50.376628  303796 system_pods.go:89] "kindnet-lh6m8" [4a445964-f164-40f8-a75a-ab72be949024] Running
	I1227 09:13:50.376646  303796 system_pods.go:89] "kube-apiserver-addons-730938" [c7900903-40ba-4ae0-a78a-84f9f38e2983] Running
	I1227 09:13:50.376656  303796 system_pods.go:89] "kube-controller-manager-addons-730938" [ac4ae5fb-0e42-4ea3-be89-85ec8d00d726] Running
	I1227 09:13:50.376663  303796 system_pods.go:89] "kube-ingress-dns-minikube" [fa60965c-1728-4a80-aade-c9cf2cf3c2d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:13:50.376668  303796 system_pods.go:89] "kube-proxy-7bh9h" [ae689d09-cb21-4a2d-8289-0510bcd830eb] Running
	I1227 09:13:50.376680  303796 system_pods.go:89] "kube-scheduler-addons-730938" [9d05a602-52bf-4ba9-9a02-86a3bc16d767] Running
	I1227 09:13:50.376686  303796 system_pods.go:89] "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:13:50.376691  303796 system_pods.go:89] "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Pending
	I1227 09:13:50.376703  303796 system_pods.go:89] "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:13:50.376717  303796 system_pods.go:89] "registry-creds-567fb78d95-gv8vz" [5eef20df-0f92-4c24-a247-6e0cce141060] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:13:50.376724  303796 system_pods.go:89] "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Pending
	I1227 09:13:50.376729  303796 system_pods.go:89] "snapshot-controller-6588d87457-7gcsz" [31eae95c-849e-4afb-a907-35500153dad2] Pending
	I1227 09:13:50.376738  303796 system_pods.go:89] "snapshot-controller-6588d87457-qk4q8" [bedb426f-4b56-4bfb-82c9-a589362a6a58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:50.376748  303796 system_pods.go:89] "storage-provisioner" [ffb632a1-439a-4db2-8c01-b53eef01b2f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:13:50.448775  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:50.460104  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:50.460547  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:50.697952  303796 system_pods.go:86] 19 kube-system pods found
	I1227 09:13:50.698044  303796 system_pods.go:89] "coredns-7d764666f9-xtj9t" [24c14bda-773d-415f-b762-1450176d6d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:13:50.698068  303796 system_pods.go:89] "csi-hostpath-attacher-0" [5126a6a7-1d42-48d4-acbe-8a5685077733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:13:50.698109  303796 system_pods.go:89] "csi-hostpath-resizer-0" [06db8ffa-b07e-4aeb-bd46-a9ecb9a67589] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:13:50.698140  303796 system_pods.go:89] "csi-hostpathplugin-25pt8" [7cdc8d25-60b9-4976-8487-5d0c851cee4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:13:50.698179  303796 system_pods.go:89] "etcd-addons-730938" [7285901f-d692-42c6-8ebd-639a35437696] Running
	I1227 09:13:50.698213  303796 system_pods.go:89] "kindnet-lh6m8" [4a445964-f164-40f8-a75a-ab72be949024] Running
	I1227 09:13:50.698235  303796 system_pods.go:89] "kube-apiserver-addons-730938" [c7900903-40ba-4ae0-a78a-84f9f38e2983] Running
	I1227 09:13:50.698254  303796 system_pods.go:89] "kube-controller-manager-addons-730938" [ac4ae5fb-0e42-4ea3-be89-85ec8d00d726] Running
	I1227 09:13:50.698278  303796 system_pods.go:89] "kube-ingress-dns-minikube" [fa60965c-1728-4a80-aade-c9cf2cf3c2d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:13:50.698341  303796 system_pods.go:89] "kube-proxy-7bh9h" [ae689d09-cb21-4a2d-8289-0510bcd830eb] Running
	I1227 09:13:50.698364  303796 system_pods.go:89] "kube-scheduler-addons-730938" [9d05a602-52bf-4ba9-9a02-86a3bc16d767] Running
	I1227 09:13:50.698395  303796 system_pods.go:89] "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:13:50.698430  303796 system_pods.go:89] "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:13:50.698462  303796 system_pods.go:89] "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:13:50.698489  303796 system_pods.go:89] "registry-creds-567fb78d95-gv8vz" [5eef20df-0f92-4c24-a247-6e0cce141060] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:13:50.698516  303796 system_pods.go:89] "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:13:50.698548  303796 system_pods.go:89] "snapshot-controller-6588d87457-7gcsz" [31eae95c-849e-4afb-a907-35500153dad2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:50.698577  303796 system_pods.go:89] "snapshot-controller-6588d87457-qk4q8" [bedb426f-4b56-4bfb-82c9-a589362a6a58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:50.698603  303796 system_pods.go:89] "storage-provisioner" [ffb632a1-439a-4db2-8c01-b53eef01b2f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:13:50.747286  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:50.907661  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:50.958967  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:50.960646  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:51.124467  303796 system_pods.go:86] 19 kube-system pods found
	I1227 09:13:51.124562  303796 system_pods.go:89] "coredns-7d764666f9-xtj9t" [24c14bda-773d-415f-b762-1450176d6d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:13:51.124587  303796 system_pods.go:89] "csi-hostpath-attacher-0" [5126a6a7-1d42-48d4-acbe-8a5685077733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:13:51.124626  303796 system_pods.go:89] "csi-hostpath-resizer-0" [06db8ffa-b07e-4aeb-bd46-a9ecb9a67589] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:13:51.124658  303796 system_pods.go:89] "csi-hostpathplugin-25pt8" [7cdc8d25-60b9-4976-8487-5d0c851cee4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:13:51.124681  303796 system_pods.go:89] "etcd-addons-730938" [7285901f-d692-42c6-8ebd-639a35437696] Running
	I1227 09:13:51.124705  303796 system_pods.go:89] "kindnet-lh6m8" [4a445964-f164-40f8-a75a-ab72be949024] Running
	I1227 09:13:51.124738  303796 system_pods.go:89] "kube-apiserver-addons-730938" [c7900903-40ba-4ae0-a78a-84f9f38e2983] Running
	I1227 09:13:51.124768  303796 system_pods.go:89] "kube-controller-manager-addons-730938" [ac4ae5fb-0e42-4ea3-be89-85ec8d00d726] Running
	I1227 09:13:51.124796  303796 system_pods.go:89] "kube-ingress-dns-minikube" [fa60965c-1728-4a80-aade-c9cf2cf3c2d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:13:51.124818  303796 system_pods.go:89] "kube-proxy-7bh9h" [ae689d09-cb21-4a2d-8289-0510bcd830eb] Running
	I1227 09:13:51.124854  303796 system_pods.go:89] "kube-scheduler-addons-730938" [9d05a602-52bf-4ba9-9a02-86a3bc16d767] Running
	I1227 09:13:51.124890  303796 system_pods.go:89] "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:13:51.124914  303796 system_pods.go:89] "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:13:51.124960  303796 system_pods.go:89] "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:13:51.124989  303796 system_pods.go:89] "registry-creds-567fb78d95-gv8vz" [5eef20df-0f92-4c24-a247-6e0cce141060] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:13:51.125012  303796 system_pods.go:89] "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:13:51.125040  303796 system_pods.go:89] "snapshot-controller-6588d87457-7gcsz" [31eae95c-849e-4afb-a907-35500153dad2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:51.125078  303796 system_pods.go:89] "snapshot-controller-6588d87457-qk4q8" [bedb426f-4b56-4bfb-82c9-a589362a6a58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:51.125108  303796 system_pods.go:89] "storage-provisioner" [ffb632a1-439a-4db2-8c01-b53eef01b2f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:13:51.254123  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:51.408694  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:51.459391  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:51.459643  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:51.592902  303796 system_pods.go:86] 19 kube-system pods found
	I1227 09:13:51.592942  303796 system_pods.go:89] "coredns-7d764666f9-xtj9t" [24c14bda-773d-415f-b762-1450176d6d61] Running
	I1227 09:13:51.592953  303796 system_pods.go:89] "csi-hostpath-attacher-0" [5126a6a7-1d42-48d4-acbe-8a5685077733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:13:51.592960  303796 system_pods.go:89] "csi-hostpath-resizer-0" [06db8ffa-b07e-4aeb-bd46-a9ecb9a67589] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:13:51.592972  303796 system_pods.go:89] "csi-hostpathplugin-25pt8" [7cdc8d25-60b9-4976-8487-5d0c851cee4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:13:51.592978  303796 system_pods.go:89] "etcd-addons-730938" [7285901f-d692-42c6-8ebd-639a35437696] Running
	I1227 09:13:51.592984  303796 system_pods.go:89] "kindnet-lh6m8" [4a445964-f164-40f8-a75a-ab72be949024] Running
	I1227 09:13:51.592989  303796 system_pods.go:89] "kube-apiserver-addons-730938" [c7900903-40ba-4ae0-a78a-84f9f38e2983] Running
	I1227 09:13:51.592995  303796 system_pods.go:89] "kube-controller-manager-addons-730938" [ac4ae5fb-0e42-4ea3-be89-85ec8d00d726] Running
	I1227 09:13:51.593005  303796 system_pods.go:89] "kube-ingress-dns-minikube" [fa60965c-1728-4a80-aade-c9cf2cf3c2d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:13:51.593010  303796 system_pods.go:89] "kube-proxy-7bh9h" [ae689d09-cb21-4a2d-8289-0510bcd830eb] Running
	I1227 09:13:51.593028  303796 system_pods.go:89] "kube-scheduler-addons-730938" [9d05a602-52bf-4ba9-9a02-86a3bc16d767] Running
	I1227 09:13:51.593034  303796 system_pods.go:89] "metrics-server-5778bb4788-fjcqt" [00092f37-6735-4931-aed2-2c7db199670b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:13:51.593048  303796 system_pods.go:89] "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:13:51.593062  303796 system_pods.go:89] "registry-788cd7d5bc-8bnrd" [d91b0f24-ead6-4a7a-9c0d-b90b07ab7ef6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:13:51.593069  303796 system_pods.go:89] "registry-creds-567fb78d95-gv8vz" [5eef20df-0f92-4c24-a247-6e0cce141060] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:13:51.593085  303796 system_pods.go:89] "registry-proxy-j242b" [d24d9cbd-01f8-457f-bf06-940800f1f0d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:13:51.593092  303796 system_pods.go:89] "snapshot-controller-6588d87457-7gcsz" [31eae95c-849e-4afb-a907-35500153dad2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:51.593099  303796 system_pods.go:89] "snapshot-controller-6588d87457-qk4q8" [bedb426f-4b56-4bfb-82c9-a589362a6a58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:13:51.593110  303796 system_pods.go:89] "storage-provisioner" [ffb632a1-439a-4db2-8c01-b53eef01b2f0] Running
	I1227 09:13:51.593118  303796 system_pods.go:126] duration metric: took 1.53698379s to wait for k8s-apps to be running ...
	I1227 09:13:51.593130  303796 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:13:51.593185  303796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:13:51.611750  303796 system_svc.go:56] duration metric: took 18.610394ms WaitForService to wait for kubelet
	I1227 09:13:51.611781  303796 kubeadm.go:587] duration metric: took 15.923876653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:13:51.611839  303796 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:13:51.615782  303796 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 09:13:51.615816  303796 node_conditions.go:123] node cpu capacity is 2
	I1227 09:13:51.615831  303796 node_conditions.go:105] duration metric: took 3.985141ms to run NodePressure ...
	I1227 09:13:51.615844  303796 start.go:242] waiting for startup goroutines ...
	I1227 09:13:51.745601  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:51.907585  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:51.959341  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:51.959465  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:52.244513  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:52.408443  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:52.459291  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:52.461507  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:52.744794  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:52.907845  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:52.958201  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:52.958252  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:53.245637  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:53.408075  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:53.461049  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:53.461578  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:53.744666  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:53.907918  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:53.959847  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:53.960483  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:54.244846  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:54.408225  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:54.460202  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:54.460599  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:54.744644  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:54.907925  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:54.959980  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:54.960707  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:55.246593  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:55.408381  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:55.460714  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:55.460805  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:55.744310  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:55.907757  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:55.960237  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:55.960789  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:56.244612  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:56.407946  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:56.459205  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:56.461030  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:56.744911  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:56.907784  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:56.958422  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:56.959494  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:57.244369  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:57.408881  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:57.460352  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:57.460820  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:57.744523  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:57.907703  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:57.959560  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:57.960362  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:58.244725  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:58.408521  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:58.460298  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:58.460678  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:58.745668  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:58.912597  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:58.966020  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:58.966488  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:59.246771  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:59.412495  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:59.460943  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:13:59.461471  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:59.745845  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:13:59.907368  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:13:59.960768  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:13:59.960868  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:00.262024  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:00.408828  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:00.460872  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:00.461362  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:00.744881  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:00.908127  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:00.960136  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:00.960556  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:01.245795  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:01.408134  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:01.460667  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:01.461140  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:01.748339  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:01.907983  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:01.961870  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:01.962477  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:02.245058  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:02.413153  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:02.461818  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:02.463800  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:02.745555  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:02.907764  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:02.961179  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:02.961559  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:03.243936  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:03.414794  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:03.516379  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:03.516781  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:03.744629  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:03.908170  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:03.960898  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:03.960973  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:04.246863  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:04.407502  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:04.458767  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:04.458897  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:04.746361  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:04.907727  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:04.958650  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:04.959395  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:05.243869  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:05.411684  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:05.461691  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:05.462197  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:05.745155  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:05.908390  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:05.960128  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:05.960534  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:06.244271  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:06.408742  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:06.459307  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:06.459801  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:06.745331  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:06.908270  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:06.958397  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:06.958731  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:07.243883  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:07.408287  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:07.459300  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:07.460583  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:07.744216  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:07.908392  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:07.960176  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:07.961149  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:08.243930  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:08.408891  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:08.461684  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:08.462418  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:08.744256  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:08.909497  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:08.959780  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:08.960003  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:09.244720  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:09.407875  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:09.457771  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:09.459989  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:09.743970  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:09.907737  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:09.958849  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:09.960858  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:10.245844  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:10.408331  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:10.459187  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:10.459586  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:10.743673  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:10.907649  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:10.960484  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:10.961319  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:11.244392  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:11.407549  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:11.459684  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:11.460361  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:11.744475  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:11.907658  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:11.961624  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:11.962044  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:12.244685  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:12.412936  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:12.459991  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:12.460188  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:12.744880  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:12.907920  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:12.958297  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:12.960744  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:13.243750  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:13.407573  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:13.459825  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:13.460537  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:13.744255  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:13.908360  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:13.959136  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:13.959276  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:14.244667  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:14.408128  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:14.469188  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:14.474492  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:14.746893  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:14.907921  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:14.960014  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:14.960178  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:15.244973  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:15.407401  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:15.458143  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:15.460073  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:15.745075  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:15.908080  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:15.959578  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:15.961711  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:16.244132  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:16.408431  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:16.458960  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:16.459418  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:16.743940  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:16.908034  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:16.958571  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:16.959665  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:17.245482  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:17.407872  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:17.457980  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:17.458287  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:17.744123  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:17.908046  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:17.959211  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:17.959846  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:18.244031  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:18.408024  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:18.459678  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:18.460188  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:18.744872  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:18.909001  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:18.959855  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:18.960203  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:19.244802  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:19.408594  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:19.459478  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:19.460556  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:19.743863  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:19.908018  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:19.960126  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:19.961289  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:20.244551  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:20.407963  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:20.460112  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:20.460389  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:20.754921  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:20.908544  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:20.959750  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:20.959925  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:21.244701  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:21.412435  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:21.461080  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:21.463804  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:21.744502  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:21.907645  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:21.958965  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:21.959077  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:22.244475  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:22.426610  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:22.517053  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:22.520846  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:22.744558  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:22.907296  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:22.960429  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:22.960914  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:23.244379  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:23.408604  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:23.464237  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:23.464581  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:23.744578  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:23.908226  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:23.960781  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:23.961475  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:24.244489  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:24.407574  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:24.460574  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:24.460969  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:24.745327  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:24.908019  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:24.959108  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:24.959258  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:25.243883  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:25.409248  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:25.461101  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:25.470660  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:25.744338  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:25.908221  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:25.959475  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:25.960637  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:26.244631  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:26.407555  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:26.467565  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:26.469189  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:26.745696  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:26.908189  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:26.958269  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:26.959533  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:27.244073  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:27.410444  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:27.461678  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:27.463541  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:27.744455  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:27.908488  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:27.959327  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:27.960230  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:28.244897  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:28.408055  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:28.459644  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:28.459849  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:28.744634  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:28.907831  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:28.959038  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:28.960744  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:29.244501  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:29.408015  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:29.461164  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:29.461310  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:29.743766  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:29.908595  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:29.959146  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:29.958977  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:30.248116  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:30.407731  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:30.458948  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:14:30.459132  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:30.746329  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:30.908960  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:30.961553  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:30.962442  303796 kapi.go:107] duration metric: took 48.006778411s to wait for kubernetes.io/minikube-addons=registry ...
	I1227 09:14:31.244650  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:31.407794  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:31.458204  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:31.744922  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:31.908339  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:31.958770  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:32.243595  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:32.410576  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:32.459259  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:32.750786  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:32.907358  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:32.958582  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:33.244382  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:33.407439  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:33.458671  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:33.744155  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:33.909478  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:33.958995  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:34.245153  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:34.409118  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:34.459766  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:34.749646  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:34.907889  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:34.957838  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:35.244623  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:35.408646  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:35.457935  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:35.749198  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:35.920700  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:36.021628  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:36.243567  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:36.407968  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:36.458938  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:36.751817  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:36.908612  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:36.959296  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:37.244401  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:37.409908  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:37.509667  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:37.743848  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:37.907720  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:37.957926  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:38.245653  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:38.409717  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:38.458735  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:38.745357  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:38.909031  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:38.958293  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:39.244569  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:39.407672  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:39.457476  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:39.745078  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:39.908138  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:39.958508  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:40.244126  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:40.407262  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:40.458699  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:40.747432  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:40.908579  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:40.958421  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:41.245602  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:14:41.408465  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:41.458703  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:41.744185  303796 kapi.go:107] duration metric: took 58.503786799s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1227 09:14:41.908299  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:41.958234  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:42.408151  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:42.458382  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:42.908052  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:42.959177  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:43.418872  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:43.458301  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:43.907420  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:43.958556  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:44.407898  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:44.457993  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:44.908069  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:44.958051  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:45.408605  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:45.457516  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:45.908061  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:45.958066  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:46.407756  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:46.457960  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:46.909159  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:46.958228  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:47.407501  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:47.458311  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:47.907220  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:47.958358  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:48.407606  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:48.459247  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:48.907281  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:48.958585  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:49.408382  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:49.458676  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:49.908514  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:49.958780  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:50.408412  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:50.457955  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:50.908139  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:50.958804  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:51.407891  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:51.457925  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:51.908748  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:51.958505  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:52.407821  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:52.457909  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:52.908098  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:52.958591  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:53.407838  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:53.458120  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:53.908447  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:53.958563  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:54.408156  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:54.458742  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:54.907540  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:54.959690  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:55.408695  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:55.459244  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:55.908376  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:55.960037  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:56.407494  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:56.459007  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:56.908526  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:56.962882  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:57.407959  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:57.458227  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:57.908427  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:57.958797  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:58.408528  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:58.459403  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:58.907742  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:58.958910  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:59.407806  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:59.458773  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:14:59.907690  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:14:59.958858  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:15:00.416519  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:00.470103  303796 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:15:00.908386  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:00.959456  303796 kapi.go:107] duration metric: took 1m18.004743088s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1227 09:15:01.408125  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:01.908665  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:02.409605  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:02.907324  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:03.407837  303796 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:15:03.911318  303796 kapi.go:107] duration metric: took 1m17.506818164s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1227 09:15:03.914248  303796 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-730938 cluster.
	I1227 09:15:03.917212  303796 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1227 09:15:03.919963  303796 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1227 09:15:03.922900  303796 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, default-storageclass, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1227 09:15:03.925743  303796 addons.go:530] duration metric: took 1m28.237603649s for enable addons: enabled=[amd-gpu-device-plugin registry-creds nvidia-device-plugin default-storageclass cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1227 09:15:03.925797  303796 start.go:247] waiting for cluster config update ...
	I1227 09:15:03.925819  303796 start.go:256] writing updated cluster config ...
	I1227 09:15:03.926202  303796 ssh_runner.go:195] Run: rm -f paused
	I1227 09:15:03.931290  303796 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:15:03.935425  303796 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xtj9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:03.940450  303796 pod_ready.go:94] pod "coredns-7d764666f9-xtj9t" is "Ready"
	I1227 09:15:03.940478  303796 pod_ready.go:86] duration metric: took 4.981494ms for pod "coredns-7d764666f9-xtj9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:03.942989  303796 pod_ready.go:83] waiting for pod "etcd-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:03.947971  303796 pod_ready.go:94] pod "etcd-addons-730938" is "Ready"
	I1227 09:15:03.948019  303796 pod_ready.go:86] duration metric: took 5.004707ms for pod "etcd-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:03.950245  303796 pod_ready.go:83] waiting for pod "kube-apiserver-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:03.955130  303796 pod_ready.go:94] pod "kube-apiserver-addons-730938" is "Ready"
	I1227 09:15:03.955156  303796 pod_ready.go:86] duration metric: took 4.888611ms for pod "kube-apiserver-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:03.957422  303796 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:04.335313  303796 pod_ready.go:94] pod "kube-controller-manager-addons-730938" is "Ready"
	I1227 09:15:04.335410  303796 pod_ready.go:86] duration metric: took 377.93273ms for pod "kube-controller-manager-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:04.536090  303796 pod_ready.go:83] waiting for pod "kube-proxy-7bh9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:04.935037  303796 pod_ready.go:94] pod "kube-proxy-7bh9h" is "Ready"
	I1227 09:15:04.935069  303796 pod_ready.go:86] duration metric: took 398.950613ms for pod "kube-proxy-7bh9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:05.135778  303796 pod_ready.go:83] waiting for pod "kube-scheduler-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:05.535722  303796 pod_ready.go:94] pod "kube-scheduler-addons-730938" is "Ready"
	I1227 09:15:05.535802  303796 pod_ready.go:86] duration metric: took 399.99792ms for pod "kube-scheduler-addons-730938" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:15:05.535832  303796 pod_ready.go:40] duration metric: took 1.604503526s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:15:05.607457  303796 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 09:15:05.610812  303796 out.go:203] 
	W1227 09:15:05.613568  303796 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 09:15:05.616439  303796 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:15:05.619301  303796 out.go:179] * Done! kubectl is now configured to use "addons-730938" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 09:15:33 addons-730938 crio[828]: time="2025-12-27T09:15:33.873245304Z" level=info msg="Started container" PID=5394 containerID=c51d741a8146b5af5bd1f4998d993389a833315aa299ff3dd264cf37c1b2da11 description=default/test-local-path/busybox id=c8420bda-782d-405c-966a-672f522d902c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9164ee30ae7037218f902877f49225dbb6c7e2869b0f1f9420d9625c23ca2f9
	Dec 27 09:15:34 addons-730938 crio[828]: time="2025-12-27T09:15:34.927212712Z" level=info msg="Stopping pod sandbox: e9164ee30ae7037218f902877f49225dbb6c7e2869b0f1f9420d9625c23ca2f9" id=e489453f-4f6d-4523-9910-75a21b31198b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 09:15:34 addons-730938 crio[828]: time="2025-12-27T09:15:34.927538616Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:e9164ee30ae7037218f902877f49225dbb6c7e2869b0f1f9420d9625c23ca2f9 UID:164e368f-a5a3-4605-9bac-4ada46dbe522 NetNS:/var/run/netns/d98fd576-4fe3-4f0d-92c1-d4b33c2be410 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013d64b0}] Aliases:map[]}"
	Dec 27 09:15:34 addons-730938 crio[828]: time="2025-12-27T09:15:34.927681116Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:15:34 addons-730938 crio[828]: time="2025-12-27T09:15:34.955304037Z" level=info msg="Stopped pod sandbox: e9164ee30ae7037218f902877f49225dbb6c7e2869b0f1f9420d9625c23ca2f9" id=e489453f-4f6d-4523-9910-75a21b31198b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.62465414Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de/POD" id=bf7b3059-6051-46ca-8211-fee6c713f937 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.624742494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.641812166Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de Namespace:local-path-storage ID:2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07 UID:b7eab776-9841-44cb-830a-a03996bf8a56 NetNS:/var/run/netns/8a4c0968-909e-4219-aa31-0b579642760d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013d6080}] Aliases:map[]}"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.642012087Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.652976221Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de Namespace:local-path-storage ID:2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07 UID:b7eab776-9841-44cb-830a-a03996bf8a56 NetNS:/var/run/netns/8a4c0968-909e-4219-aa31-0b579642760d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013d6080}] Aliases:map[]}"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.654567439Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de for CNI network kindnet (type=ptp)"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.669689687Z" level=info msg="Ran pod sandbox 2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07 with infra container: local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de/POD" id=bf7b3059-6051-46ca-8211-fee6c713f937 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.671080237Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=9faf8522-cf29-46ac-acfd-38c32e2960a9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.67241071Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=10183045-ff9d-4b87-a414-59d72b4d1e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.683697285Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de/helper-pod" id=9f44e5bd-d36f-492b-8c44-38fd8fd3cbc0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.683809557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.698073651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.69867384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.71786195Z" level=info msg="Created container 8b67196d0a6ae4f8ac09f286a73f945da53845fff414ce7632175ea383d2c679: local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de/helper-pod" id=9f44e5bd-d36f-492b-8c44-38fd8fd3cbc0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.727499764Z" level=info msg="Starting container: 8b67196d0a6ae4f8ac09f286a73f945da53845fff414ce7632175ea383d2c679" id=539029ed-521c-4dd1-a0ed-7e51311c2465 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:15:36 addons-730938 crio[828]: time="2025-12-27T09:15:36.740274467Z" level=info msg="Started container" PID=5496 containerID=8b67196d0a6ae4f8ac09f286a73f945da53845fff414ce7632175ea383d2c679 description=local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de/helper-pod id=539029ed-521c-4dd1-a0ed-7e51311c2465 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07
	Dec 27 09:15:37 addons-730938 crio[828]: time="2025-12-27T09:15:37.947329343Z" level=info msg="Stopping pod sandbox: 2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07" id=7a863593-803c-41f2-a59e-4f09c2546f0b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 09:15:37 addons-730938 crio[828]: time="2025-12-27T09:15:37.948041263Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de Namespace:local-path-storage ID:2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07 UID:b7eab776-9841-44cb-830a-a03996bf8a56 NetNS:/var/run/netns/8a4c0968-909e-4219-aa31-0b579642760d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015510a8}] Aliases:map[]}"
	Dec 27 09:15:37 addons-730938 crio[828]: time="2025-12-27T09:15:37.948230016Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de from CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:15:37 addons-730938 crio[828]: time="2025-12-27T09:15:37.97601782Z" level=info msg="Stopped pod sandbox: 2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07" id=7a863593-803c-41f2-a59e-4f09c2546f0b name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	8b67196d0a6ae       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   2e02f47739cb3       helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de   local-path-storage
	c51d741a8146b       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   e9164ee30ae70       test-local-path                                              default
	3a7610f536262       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   700654f8cfacd       helper-pod-create-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de   local-path-storage
	54bcfeb511bc5       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   afb4b53a209a7       registry-test                                                default
	7cd327226a96b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   6089de9d7c636       busybox                                                      default
	212fd3052d6f7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 35 seconds ago       Running             gcp-auth                                 0                   0595afd468f24       gcp-auth-5bbcf684b5-lwzmk                                    gcp-auth
	562a63de7e22d       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             38 seconds ago       Running             controller                               0                   474cf60279913       ingress-nginx-controller-7847b5c79c-7vjrf                    ingress-nginx
	1ba75b0e93cff       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          57 seconds ago       Running             csi-snapshotter                          0                   c0ec5d92adce6       csi-hostpathplugin-25pt8                                     kube-system
	bbba8dda954f7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          58 seconds ago       Running             csi-provisioner                          0                   c0ec5d92adce6       csi-hostpathplugin-25pt8                                     kube-system
	4f72823b84eba       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   c0ec5d92adce6       csi-hostpathplugin-25pt8                                     kube-system
	22217205eb48a       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   c0ec5d92adce6       csi-hostpathplugin-25pt8                                     kube-system
	97995be12aa46       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   c0ec5d92adce6       csi-hostpathplugin-25pt8                                     kube-system
	9289e8b0db997       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    1                   911febddceeb7       ingress-nginx-admission-patch-q8btc                          ingress-nginx
	c9de917a3f5ab       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            About a minute ago   Running             gadget                                   0                   5984ba7447161       gadget-gj25j                                                 gadget
	baf605d1813f1       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   aa33ea5f4d6cf       registry-788cd7d5bc-8bnrd                                    kube-system
	464d8a8fd1104       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   167f38a19c5c1       local-path-provisioner-c44bcd496-nhm98                       local-path-storage
	ea32f9690b19d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   b68f04ffd0c9b       snapshot-controller-6588d87457-qk4q8                         kube-system
	5cfe51c2b0a5d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   7e75c97397a73       snapshot-controller-6588d87457-7gcsz                         kube-system
	1587e53995e7f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   67e7775a2ae4c       registry-proxy-j242b                                         kube-system
	b4430baef8968       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   b7baa14363048       ingress-nginx-admission-create-dr8pw                         ingress-nginx
	8d7c1678ab086       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   ee8a0c3aa6b69       nvidia-device-plugin-daemonset-vmvrx                         kube-system
	76711d56ef9e8       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   71f27d7dc2860       csi-hostpath-attacher-0                                      kube-system
	d3944a9f1f7d0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   c0ec5d92adce6       csi-hostpathplugin-25pt8                                     kube-system
	ba6025b2bde56       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   accbec23434f1       csi-hostpath-resizer-0                                       kube-system
	741be6f972155       ghcr.io/manusa/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                                  About a minute ago   Running             yakd                                     0                   7cb15d3bfe0a5       yakd-dashboard-865bfb49b9-2ht94                              yakd-dashboard
	68d9397bf9ce7       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               About a minute ago   Running             cloud-spanner-emulator                   0                   067754daa8c51       cloud-spanner-emulator-5649ccbc87-29jvv                      default
	f5b8cfdd7b740       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   622afcdffd890       metrics-server-5778bb4788-fjcqt                              kube-system
	0ffe4ed190eb9       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   baa60d710ac50       kube-ingress-dns-minikube                                    kube-system
	92def02a9b247       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             About a minute ago   Running             coredns                                  0                   54200f4a27b30       coredns-7d764666f9-xtj9t                                     kube-system
	f2eee0c100ef3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   5b06d67a3d4bc       storage-provisioner                                          kube-system
	ce77df4270ad6       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           About a minute ago   Running             kindnet-cni                              0                   dcdf2b6cc8549       kindnet-lh6m8                                                kube-system
	b7820e8e00a4a       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             2 minutes ago        Running             kube-proxy                               0                   2047175c58b8c       kube-proxy-7bh9h                                             kube-system
	375550202d2a9       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             2 minutes ago        Running             kube-controller-manager                  0                   12246fc9ba4e1       kube-controller-manager-addons-730938                        kube-system
	906652711aa1d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             2 minutes ago        Running             etcd                                     0                   f24be93bc7665       etcd-addons-730938                                           kube-system
	3acfe18930cb5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             2 minutes ago        Running             kube-scheduler                           0                   c2c5a2260b4fc       kube-scheduler-addons-730938                                 kube-system
	6bc7835ee8cc0       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             2 minutes ago        Running             kube-apiserver                           0                   021965dbed1a1       kube-apiserver-addons-730938                                 kube-system
	
	
	==> coredns [92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921] <==
	[INFO] 10.244.0.10:48953 - 29349 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003319063s
	[INFO] 10.244.0.10:48953 - 49792 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000153824s
	[INFO] 10.244.0.10:48953 - 50946 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000106742s
	[INFO] 10.244.0.10:56623 - 45820 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00030741s
	[INFO] 10.244.0.10:56623 - 46066 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000269477s
	[INFO] 10.244.0.10:59740 - 27072 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000153487s
	[INFO] 10.244.0.10:59740 - 27260 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109138s
	[INFO] 10.244.0.10:43139 - 8927 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116465s
	[INFO] 10.244.0.10:43139 - 9116 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149459s
	[INFO] 10.244.0.10:50620 - 2524 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001814861s
	[INFO] 10.244.0.10:50620 - 2755 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001661882s
	[INFO] 10.244.0.10:55033 - 50503 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144863s
	[INFO] 10.244.0.10:55033 - 50692 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000199076s
	[INFO] 10.244.0.20:49136 - 36200 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158271s
	[INFO] 10.244.0.20:48701 - 35767 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180778s
	[INFO] 10.244.0.20:57120 - 63213 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226407s
	[INFO] 10.244.0.20:54982 - 19556 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100391s
	[INFO] 10.244.0.20:53543 - 50426 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131062s
	[INFO] 10.244.0.20:45382 - 45383 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094804s
	[INFO] 10.244.0.20:46447 - 56802 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001969307s
	[INFO] 10.244.0.20:42634 - 63277 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001933254s
	[INFO] 10.244.0.20:49647 - 48694 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002017826s
	[INFO] 10.244.0.20:43542 - 41919 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00205397s
	[INFO] 10.244.0.22:38021 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151978s
	[INFO] 10.244.0.22:38797 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115686s
	
	
	==> describe nodes <==
	Name:               addons-730938
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-730938
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=addons-730938
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_13_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-730938
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-730938"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:13:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-730938
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:15:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:15:22 +0000   Sat, 27 Dec 2025 09:13:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:15:22 +0000   Sat, 27 Dec 2025 09:13:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:15:22 +0000   Sat, 27 Dec 2025 09:13:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:15:22 +0000   Sat, 27 Dec 2025 09:13:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-730938
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                3dcc28a2-1915-4829-992d-548af5eb9b03
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     cloud-spanner-emulator-5649ccbc87-29jvv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gadget                      gadget-gj25j                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gcp-auth                    gcp-auth-5bbcf684b5-lwzmk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-7vjrf    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         116s
	  kube-system                 coredns-7d764666f9-xtj9t                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpathplugin-25pt8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 etcd-addons-730938                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-lh6m8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-addons-730938                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-addons-730938        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-7bh9h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-addons-730938                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 metrics-server-5778bb4788-fjcqt              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         117s
	  kube-system                 nvidia-device-plugin-daemonset-vmvrx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 registry-788cd7d5bc-8bnrd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 registry-creds-567fb78d95-gv8vz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-proxy-j242b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-6588d87457-7gcsz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-6588d87457-qk4q8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  local-path-storage          local-path-provisioner-c44bcd496-nhm98       0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  yakd-dashboard              yakd-dashboard-865bfb49b9-2ht94              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  2m4s  node-controller  Node addons-730938 event: Registered Node addons-730938 in Controller
	
	
	==> dmesg <==
	[Dec27 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014566] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507261] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034994] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815111] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.929393] kauditd_printk_skb: 36 callbacks suppressed
	[Dec27 08:11] hrtimer: interrupt took 6667405 ns
	[Dec27 08:14] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:13] overlayfs: idmapped layers are currently not supported
	[  +0.064109] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23] <==
	{"level":"info","ts":"2025-12-27T09:13:24.606772Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:13:25.557477Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:13:25.557526Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:13:25.557569Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-12-27T09:13:25.557598Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:13:25.557615Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:13:25.558699Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T09:13:25.558731Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:13:25.558750Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:13:25.558761Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T09:13:25.560036Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-730938 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:13:25.560061Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:13:25.560291Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:13:25.560314Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:13:25.560083Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:13:25.560094Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:13:25.563015Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:13:25.563104Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:13:25.563133Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:13:25.563154Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:13:25.563200Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:13:25.563505Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:13:25.564662Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:13:25.566310Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:13:25.566524Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [212fd3052d6f72078066f00f4b912771ebc80d54adf25ad35ffc162ddf9086b9] <==
	2025/12/27 09:15:03 GCP Auth Webhook started!
	2025/12/27 09:15:06 Ready to marshal response ...
	2025/12/27 09:15:06 Ready to write response ...
	2025/12/27 09:15:06 Ready to marshal response ...
	2025/12/27 09:15:06 Ready to write response ...
	2025/12/27 09:15:06 Ready to marshal response ...
	2025/12/27 09:15:06 Ready to write response ...
	2025/12/27 09:15:26 Ready to marshal response ...
	2025/12/27 09:15:26 Ready to write response ...
	2025/12/27 09:15:28 Ready to marshal response ...
	2025/12/27 09:15:28 Ready to write response ...
	2025/12/27 09:15:28 Ready to marshal response ...
	2025/12/27 09:15:28 Ready to write response ...
	2025/12/27 09:15:36 Ready to marshal response ...
	2025/12/27 09:15:36 Ready to write response ...
	
	
	==> kernel <==
	 09:15:38 up  1:58,  0 user,  load average: 2.50, 2.73, 2.53
	Linux addons-730938 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764] <==
	I1227 09:13:39.421571       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:13:39.421603       1 metrics.go:72] Registering metrics
	I1227 09:13:39.421648       1 controller.go:711] "Syncing nftables rules"
	I1227 09:13:49.235809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:13:49.235983       1 main.go:301] handling current node
	I1227 09:13:59.236438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:13:59.236472       1 main.go:301] handling current node
	I1227 09:14:09.236432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:14:09.236477       1 main.go:301] handling current node
	I1227 09:14:19.237104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:14:19.237148       1 main.go:301] handling current node
	I1227 09:14:29.235596       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:14:29.235648       1 main.go:301] handling current node
	I1227 09:14:39.236164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:14:39.236216       1 main.go:301] handling current node
	I1227 09:14:49.241116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:14:49.241148       1 main.go:301] handling current node
	I1227 09:14:59.243113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:14:59.243149       1 main.go:301] handling current node
	I1227 09:15:09.236466       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:15:09.236500       1 main.go:301] handling current node
	I1227 09:15:19.237112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:15:19.237146       1 main.go:301] handling current node
	I1227 09:15:29.235471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:15:29.235502       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620] <==
	W1227 09:13:43.459489       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:13:43.479181       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 09:13:46.284744       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.118.41"}
	W1227 09:13:49.478454       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.118.41:443: connect: connection refused
	E1227 09:13:49.478500       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.118.41:443: connect: connection refused" logger="UnhandledError"
	W1227 09:13:49.478973       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.118.41:443: connect: connection refused
	E1227 09:13:49.479007       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.118.41:443: connect: connection refused" logger="UnhandledError"
	W1227 09:13:49.592947       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.118.41:443: connect: connection refused
	E1227 09:13:49.592987       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.118.41:443: connect: connection refused" logger="UnhandledError"
	E1227 09:14:03.380594       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.187.156:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.187.156:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.187.156:443: connect: connection refused" logger="UnhandledError"
	W1227 09:14:03.380780       1 handler_proxy.go:99] no RequestInfo found in the context
	E1227 09:14:03.380865       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1227 09:14:03.381427       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.187.156:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.187.156:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.187.156:443: connect: connection refused" logger="UnhandledError"
	E1227 09:14:03.386584       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.187.156:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.187.156:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.187.156:443: connect: connection refused" logger="UnhandledError"
	I1227 09:14:03.512279       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1227 09:14:04.128209       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 09:14:04.145616       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 09:14:04.236014       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:14:04.261160       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1227 09:15:16.033412       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50864: use of closed network connection
	E1227 09:15:16.262126       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50894: use of closed network connection
	E1227 09:15:16.390925       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50902: use of closed network connection
	
	
	==> kube-controller-manager [375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3] <==
	I1227 09:13:34.111577       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111597       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111631       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111651       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111671       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111714       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111827       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111854       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111874       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.111915       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.113435       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.114044       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.117943       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:13:34.156971       1 range_allocator.go:433] "Set node PodCIDR" node="addons-730938" podCIDRs=["10.244.0.0/24"]
	I1227 09:13:34.203389       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:34.203413       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:13:34.203420       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:13:34.218568       1 shared_informer.go:377] "Caches are synced"
	E1227 09:13:41.878603       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1227 09:13:54.103309       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 09:14:04.120322       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1227 09:14:04.120409       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:14:04.220651       1 shared_informer.go:377] "Caches are synced"
	I1227 09:14:04.228325       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:14:04.329420       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575] <==
	I1227 09:13:35.944816       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:13:36.046025       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:13:36.149933       1 shared_informer.go:377] "Caches are synced"
	I1227 09:13:36.149966       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 09:13:36.150038       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:13:36.347704       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:13:36.347755       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:13:36.364232       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:13:36.364500       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:13:36.364511       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:13:36.374941       1 config.go:200] "Starting service config controller"
	I1227 09:13:36.374954       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:13:36.374975       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:13:36.374979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:13:36.374991       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:13:36.374999       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:13:36.375654       1 config.go:309] "Starting node config controller"
	I1227 09:13:36.375662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:13:36.375668       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:13:36.475816       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:13:36.475861       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:13:36.475889       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e] <==
	E1227 09:13:27.443180       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:13:27.443258       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:13:27.443307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:13:27.443372       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:13:27.443422       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:13:27.443680       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:13:27.443787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:13:27.443886       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:13:27.443977       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:13:27.444071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:13:27.444169       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:13:27.444291       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:13:27.454614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:13:27.454771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:13:27.454918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:13:27.461664       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:13:28.267395       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:13:28.309458       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:13:28.326110       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:13:28.372022       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:13:28.454042       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:13:28.532738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:13:28.543396       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:13:28.907172       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 09:13:30.785941       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.161342    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/164e368f-a5a3-4605-9bac-4ada46dbe522-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de" pod "164e368f-a5a3-4605-9bac-4ada46dbe522" (UID: "164e368f-a5a3-4605-9bac-4ada46dbe522"). InnerVolumeSpecName "pvc-aee992c0-fa66-4d17-ace8-b295e4d945de". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.161488    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/164e368f-a5a3-4605-9bac-4ada46dbe522-gcp-creds" pod "164e368f-a5a3-4605-9bac-4ada46dbe522" (UID: "164e368f-a5a3-4605-9bac-4ada46dbe522"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.166602    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164e368f-a5a3-4605-9bac-4ada46dbe522-kube-api-access-648vj" pod "164e368f-a5a3-4605-9bac-4ada46dbe522" (UID: "164e368f-a5a3-4605-9bac-4ada46dbe522"). InnerVolumeSpecName "kube-api-access-648vj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.261336    1253 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/164e368f-a5a3-4605-9bac-4ada46dbe522-gcp-creds\") on node \"addons-730938\" DevicePath \"\""
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.261397    1253 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-648vj\" (UniqueName: \"kubernetes.io/projected/164e368f-a5a3-4605-9bac-4ada46dbe522-kube-api-access-648vj\") on node \"addons-730938\" DevicePath \"\""
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.261413    1253 reconciler_common.go:299] "Volume detached for volume \"pvc-aee992c0-fa66-4d17-ace8-b295e4d945de\" (UniqueName: \"kubernetes.io/host-path/164e368f-a5a3-4605-9bac-4ada46dbe522-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de\") on node \"addons-730938\" DevicePath \"\""
	Dec 27 09:15:35 addons-730938 kubelet[1253]: I1227 09:15:35.937375    1253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9164ee30ae7037218f902877f49225dbb6c7e2869b0f1f9420d9625c23ca2f9"
	Dec 27 09:15:36 addons-730938 kubelet[1253]: I1227 09:15:36.374689    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b7eab776-9841-44cb-830a-a03996bf8a56-script\") pod \"helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") " pod="local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de"
	Dec 27 09:15:36 addons-730938 kubelet[1253]: I1227 09:15:36.375333    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfp4\" (UniqueName: \"kubernetes.io/projected/b7eab776-9841-44cb-830a-a03996bf8a56-kube-api-access-7gfp4\") pod \"helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") " pod="local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de"
	Dec 27 09:15:36 addons-730938 kubelet[1253]: I1227 09:15:36.375499    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-data\") pod \"helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") " pod="local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de"
	Dec 27 09:15:36 addons-730938 kubelet[1253]: I1227 09:15:36.375642    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-gcp-creds\") pod \"helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") " pod="local-path-storage/helper-pod-delete-pvc-aee992c0-fa66-4d17-ace8-b295e4d945de"
	Dec 27 09:15:36 addons-730938 kubelet[1253]: W1227 09:15:36.660230    1253 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/600191d502c5bdaf6421cac30d35e9016fb51ee7a8f2cd91f197132f841dff0e/crio-2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07 WatchSource:0}: Error finding container 2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07: Status 404 returned error can't find the container with id 2e02f47739cb3a2bf9f596ad01f4a11bc893238ceb2925b852eb55b92975bc07
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.930887    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="164e368f-a5a3-4605-9bac-4ada46dbe522" path="/var/lib/kubelet/pods/164e368f-a5a3-4605-9bac-4ada46dbe522/volumes"
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.989878    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/b7eab776-9841-44cb-830a-a03996bf8a56-kube-api-access-7gfp4\" (UniqueName: \"kubernetes.io/projected/b7eab776-9841-44cb-830a-a03996bf8a56-kube-api-access-7gfp4\") pod \"b7eab776-9841-44cb-830a-a03996bf8a56\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") "
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.990813    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-gcp-creds\") pod \"b7eab776-9841-44cb-830a-a03996bf8a56\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") "
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.990841    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-data\" (UniqueName: \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-data\") pod \"b7eab776-9841-44cb-830a-a03996bf8a56\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") "
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.990874    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/b7eab776-9841-44cb-830a-a03996bf8a56-script\" (UniqueName: \"kubernetes.io/configmap/b7eab776-9841-44cb-830a-a03996bf8a56-script\") pod \"b7eab776-9841-44cb-830a-a03996bf8a56\" (UID: \"b7eab776-9841-44cb-830a-a03996bf8a56\") "
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.991351    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7eab776-9841-44cb-830a-a03996bf8a56-script" pod "b7eab776-9841-44cb-830a-a03996bf8a56" (UID: "b7eab776-9841-44cb-830a-a03996bf8a56"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.991402    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-gcp-creds" pod "b7eab776-9841-44cb-830a-a03996bf8a56" (UID: "b7eab776-9841-44cb-830a-a03996bf8a56"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.991421    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-data" pod "b7eab776-9841-44cb-830a-a03996bf8a56" (UID: "b7eab776-9841-44cb-830a-a03996bf8a56"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 09:15:37 addons-730938 kubelet[1253]: I1227 09:15:37.994232    1253 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7eab776-9841-44cb-830a-a03996bf8a56-kube-api-access-7gfp4" pod "b7eab776-9841-44cb-830a-a03996bf8a56" (UID: "b7eab776-9841-44cb-830a-a03996bf8a56"). InnerVolumeSpecName "kube-api-access-7gfp4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 27 09:15:38 addons-730938 kubelet[1253]: I1227 09:15:38.091590    1253 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7gfp4\" (UniqueName: \"kubernetes.io/projected/b7eab776-9841-44cb-830a-a03996bf8a56-kube-api-access-7gfp4\") on node \"addons-730938\" DevicePath \"\""
	Dec 27 09:15:38 addons-730938 kubelet[1253]: I1227 09:15:38.091641    1253 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-gcp-creds\") on node \"addons-730938\" DevicePath \"\""
	Dec 27 09:15:38 addons-730938 kubelet[1253]: I1227 09:15:38.091658    1253 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b7eab776-9841-44cb-830a-a03996bf8a56-data\") on node \"addons-730938\" DevicePath \"\""
	Dec 27 09:15:38 addons-730938 kubelet[1253]: I1227 09:15:38.091677    1253 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b7eab776-9841-44cb-830a-a03996bf8a56-script\") on node \"addons-730938\" DevicePath \"\""
	
	
	==> storage-provisioner [f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a] <==
	W1227 09:15:13.051975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:15.055858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:15.060934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:17.064453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:17.069043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:19.073169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:19.087103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:21.091189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:21.095924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:23.099304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:23.106298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:25.112229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:25.117662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:27.121201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:27.128404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:29.138172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:29.143057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:31.147590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:31.153094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:33.156631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:33.166610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:35.169786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:35.175108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:37.177920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:15:37.185307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-730938 -n addons-730938
helpers_test.go:270: (dbg) Run:  kubectl --context addons-730938 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-dr8pw ingress-nginx-admission-patch-q8btc registry-creds-567fb78d95-gv8vz
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-730938 describe pod ingress-nginx-admission-create-dr8pw ingress-nginx-admission-patch-q8btc registry-creds-567fb78d95-gv8vz
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-730938 describe pod ingress-nginx-admission-create-dr8pw ingress-nginx-admission-patch-q8btc registry-creds-567fb78d95-gv8vz: exit status 1 (85.622964ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dr8pw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-q8btc" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-gv8vz" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-730938 describe pod ingress-nginx-admission-create-dr8pw ingress-nginx-admission-patch-q8btc registry-creds-567fb78d95-gv8vz: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable headlamp --alsologtostderr -v=1: exit status 11 (272.857631ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:39.748218  311154 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:39.749113  311154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:39.749131  311154 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:39.749139  311154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:39.749409  311154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:39.749733  311154 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:39.750212  311154 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:39.750239  311154 addons.go:622] checking whether the cluster is paused
	I1227 09:15:39.750361  311154 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:39.750376  311154 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:39.750927  311154 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:39.769700  311154 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:39.769762  311154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:39.788454  311154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:39.897117  311154 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:39.897210  311154 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:39.927483  311154 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:39.927558  311154 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:39.927579  311154 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:39.927600  311154 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:39.927635  311154 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:39.927659  311154 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:39.927681  311154 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:39.927715  311154 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:39.927739  311154 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:39.927765  311154 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:39.927799  311154 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:39.927823  311154 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:39.927844  311154 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:39.927879  311154 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:39.927900  311154 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:39.927937  311154 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:39.927967  311154 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:39.927991  311154 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:39.928010  311154 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:39.928030  311154 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:39.928068  311154 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:39.928085  311154 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:39.928105  311154 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:39.928135  311154 cri.go:96] found id: ""
	I1227 09:15:39.928232  311154 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:39.947923  311154 out.go:203] 
	W1227 09:15:39.951001  311154 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:39.951032  311154 out.go:285] * 
	* 
	W1227 09:15:39.954325  311154 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:39.957168  311154 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.29s)
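Note: the Headlamp failure above and the addons-disable failures that follow all show the same pattern in their captured stderr: before disabling anything, minikube checks whether the cluster is paused by running "sudo runc list -f json" over SSH, and on this CRI-O node that command fails with "open /run/runc: no such file or directory", so each invocation exits with status 11 (MK_ADDON_DISABLE_PAUSED) before the addon itself is touched. A minimal way to look at both sides of that check by hand is sketched below; these invocations are illustrative only (they assume the addons-730938 profile is still running) and are not commands taken from the test itself:

	out/minikube-linux-arm64 -p addons-730938 ssh -- sudo runc list -f json     # the check minikube runs; on this node it fails: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p addons-730938 ssh -- sudo crictl ps -a --quiet  # the CRI-O view of the same containers, which does succeed in the logs above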

x
+
TestAddons/parallel/CloudSpanner (5.39s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner


=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-29jvv" [7ad6d14e-4664-4f72-9e76-257724eada12] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003634519s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (380.903355ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 09:15:36.513784  310551 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:36.514990  310551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:36.515039  310551 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:36.515065  310551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:36.515532  310551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:36.517171  310551 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:36.519512  310551 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:36.519580  310551 addons.go:622] checking whether the cluster is paused
	I1227 09:15:36.519756  310551 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:36.519825  310551 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:36.520400  310551 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:36.541983  310551 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:36.542060  310551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:36.560122  310551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:36.675341  310551 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:36.675415  310551 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:36.784942  310551 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:36.784966  310551 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:36.784972  310551 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:36.784976  310551 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:36.784979  310551 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:36.784985  310551 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:36.784988  310551 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:36.784992  310551 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:36.784995  310551 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:36.785006  310551 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:36.785010  310551 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:36.785013  310551 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:36.785024  310551 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:36.785028  310551 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:36.785030  310551 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:36.785036  310551 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:36.785039  310551 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:36.785043  310551 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:36.785047  310551 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:36.785050  310551 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:36.785054  310551 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:36.785057  310551 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:36.785061  310551 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:36.785064  310551 cri.go:96] found id: ""
	I1227 09:15:36.785116  310551 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:36.804028  310551 out.go:203] 
	W1227 09:15:36.807314  310551 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:36.807348  310551 out.go:285] * 
	* 
	W1227 09:15:36.813221  310551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:36.816415  310551 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.39s)

x
+
TestAddons/parallel/LocalPath (8.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath


=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-730938 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-730938 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-730938 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [164e368f-a5a3-4605-9bac-4ada46dbe522] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [164e368f-a5a3-4605-9bac-4ada46dbe522] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [164e368f-a5a3-4605-9bac-4ada46dbe522] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004464972s
addons_test.go:969: (dbg) Run:  kubectl --context addons-730938 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 ssh "cat /opt/local-path-provisioner/pvc-aee992c0-fa66-4d17-ace8-b295e4d945de_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-730938 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-730938 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (312.196574ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:15:36.414874  310533 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:36.416461  310533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:36.416532  310533 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:36.416559  310533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:36.416979  310533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:36.417723  310533 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:36.418184  310533 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:36.418205  310533 addons.go:622] checking whether the cluster is paused
	I1227 09:15:36.418324  310533 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:36.418341  310533 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:36.418879  310533 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:36.437634  310533 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:36.437705  310533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:36.461015  310533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:36.583349  310533 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:36.583429  310533 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:36.622224  310533 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:36.622245  310533 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:36.622250  310533 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:36.622253  310533 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:36.622256  310533 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:36.622260  310533 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:36.622263  310533 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:36.622267  310533 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:36.622270  310533 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:36.622276  310533 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:36.622279  310533 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:36.622282  310533 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:36.622285  310533 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:36.622288  310533 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:36.622291  310533 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:36.622296  310533 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:36.622299  310533 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:36.622308  310533 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:36.622311  310533 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:36.622314  310533 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:36.622318  310533 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:36.622321  310533 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:36.622324  310533 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:36.622327  310533 cri.go:96] found id: ""
	I1227 09:15:36.622378  310533 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:36.658412  310533 out.go:203] 
	W1227 09:15:36.661965  310533 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:36.662000  310533 out.go:285] * 
	* 
	W1227 09:15:36.665481  310533 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:36.671988  310533 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.49s)
TestAddons/parallel/NvidiaDevicePlugin (6.27s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-vmvrx" [e2b6c834-084a-4efc-8620-1ace4680f81e] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004403075s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (261.765842ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:15:27.976169  310110 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:27.977248  310110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:27.977263  310110 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:27.977270  310110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:27.977725  310110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:27.978068  310110 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:27.978523  310110 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:27.978550  310110 addons.go:622] checking whether the cluster is paused
	I1227 09:15:27.978670  310110 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:27.978687  310110 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:27.979209  310110 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:27.998916  310110 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:27.998975  310110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:28.020317  310110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:28.121187  310110 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:28.121313  310110 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:28.152142  310110 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:28.152166  310110 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:28.152172  310110 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:28.152175  310110 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:28.152179  310110 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:28.152183  310110 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:28.152186  310110 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:28.152189  310110 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:28.152192  310110 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:28.152224  310110 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:28.152234  310110 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:28.152238  310110 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:28.152242  310110 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:28.152245  310110 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:28.152248  310110 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:28.152258  310110 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:28.152261  310110 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:28.152266  310110 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:28.152271  310110 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:28.152275  310110 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:28.152294  310110 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:28.152302  310110 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:28.152305  310110 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:28.152308  310110 cri.go:96] found id: ""
	I1227 09:15:28.152370  310110 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:28.168455  310110 out.go:203] 
	W1227 09:15:28.171450  310110 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:28.171484  310110 out.go:285] * 
	* 
	W1227 09:15:28.174783  310110 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:28.177902  310110 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)
TestAddons/parallel/Yakd (5.27s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-2ht94" [f75416f0-03ed-4f9d-b445-e9390594f5c1] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003665181s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-730938 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-730938 addons disable yakd --alsologtostderr -v=1: exit status 11 (268.207818ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:15:21.703134  310019 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:15:21.703953  310019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:21.703968  310019 out.go:374] Setting ErrFile to fd 2...
	I1227 09:15:21.703975  310019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:15:21.704282  310019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:15:21.704641  310019 mustload.go:66] Loading cluster: addons-730938
	I1227 09:15:21.705068  310019 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:21.705107  310019 addons.go:622] checking whether the cluster is paused
	I1227 09:15:21.705246  310019 config.go:182] Loaded profile config "addons-730938": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:15:21.705264  310019 host.go:66] Checking if "addons-730938" exists ...
	I1227 09:15:21.705840  310019 cli_runner.go:164] Run: docker container inspect addons-730938 --format={{.State.Status}}
	I1227 09:15:21.728864  310019 ssh_runner.go:195] Run: systemctl --version
	I1227 09:15:21.728926  310019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-730938
	I1227 09:15:21.748248  310019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/addons-730938/id_rsa Username:docker}
	I1227 09:15:21.853320  310019 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:15:21.853412  310019 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:15:21.885589  310019 cri.go:96] found id: "1ba75b0e93cffeb9564d91bca0b56788c9988cf89785108eae6062d001ca1281"
	I1227 09:15:21.885610  310019 cri.go:96] found id: "bbba8dda954f74d2b2da7d42fcddbc4a1fdc4f4cf71a813b6c9860194a9621a6"
	I1227 09:15:21.885615  310019 cri.go:96] found id: "4f72823b84ebaa52891326b45fea830564bc3f49742b98eb173b49f92ce39b32"
	I1227 09:15:21.885619  310019 cri.go:96] found id: "22217205eb48a1ee3a6775fdd98e76ee8373818a2b962307148599ff6b4db89e"
	I1227 09:15:21.885623  310019 cri.go:96] found id: "97995be12aa464d51de8c36bec6f5b1c1434d2781ebab58eab6b6752af262ea9"
	I1227 09:15:21.885627  310019 cri.go:96] found id: "baf605d1813f1a9e9ed65db58157c7d94524aa89a258af323d12fa9e4e351ef4"
	I1227 09:15:21.885630  310019 cri.go:96] found id: "ea32f9690b19df5cc84a7588918620d2604fee37e5842259d8b41cd049af6b6b"
	I1227 09:15:21.885634  310019 cri.go:96] found id: "5cfe51c2b0a5da8e25cb261e4c52a197a97decd71f95204d7eaea008d0f884fc"
	I1227 09:15:21.885645  310019 cri.go:96] found id: "1587e53995e7f96157748b5b13fab257e13953e779f1b7d30e16c119415a5b28"
	I1227 09:15:21.885651  310019 cri.go:96] found id: "8d7c1678ab0865d6ff304859f9df7461c0b5fb240cccc28dce48d48b2c901d89"
	I1227 09:15:21.885655  310019 cri.go:96] found id: "76711d56ef9e83a83cfb91ab3e51f8eb3b7c37d33894e104276a47089d5daddd"
	I1227 09:15:21.885658  310019 cri.go:96] found id: "d3944a9f1f7d06e620da4fa1914d56a87ca4643a0c71a22346a4e69d166fa3fe"
	I1227 09:15:21.885661  310019 cri.go:96] found id: "ba6025b2bde56161b845a13d2b999f561ffcac9f93c817ee3dbb6e2cb0c42a53"
	I1227 09:15:21.885664  310019 cri.go:96] found id: "f5b8cfdd7b740ad5559a17b57ce6491d162516e94859fe3823c8fa0b8a3aeabb"
	I1227 09:15:21.885667  310019 cri.go:96] found id: "0ffe4ed190eb97e926994dfff9d7230b901fcb07d9b6f49de7216fac62ddc0dd"
	I1227 09:15:21.885672  310019 cri.go:96] found id: "92def02a9b247e15cdb1e61f48495c2505c703028c797b88c3d5486cd99c1921"
	I1227 09:15:21.885675  310019 cri.go:96] found id: "f2eee0c100ef359713536b5061784d84f3ab2e488484b19a6e5121205290199a"
	I1227 09:15:21.885686  310019 cri.go:96] found id: "ce77df4270ad600c2b64995a72ad36c61f8fe73f2347dc1e676d504603cfa764"
	I1227 09:15:21.885691  310019 cri.go:96] found id: "b7820e8e00a4a323234a16edadf1567d38a8353d5bf1183b5df375c072552575"
	I1227 09:15:21.885694  310019 cri.go:96] found id: "375550202d2a9cc42610c65fc954d2a96c60b729e6cdac6a36597b93d67c37b3"
	I1227 09:15:21.885700  310019 cri.go:96] found id: "906652711aa1d15066fd10c3fb9e2e07fcd4427364118efb6e268960ee5f8e23"
	I1227 09:15:21.885703  310019 cri.go:96] found id: "3acfe18930cb522267d2e4c6153dae8a9824a81243600ff5b20ce7d2561fc94e"
	I1227 09:15:21.885706  310019 cri.go:96] found id: "6bc7835ee8cc0444db6317a1fa1a95a206d02f7f7a29035f8adadd2213326620"
	I1227 09:15:21.885709  310019 cri.go:96] found id: ""
	I1227 09:15:21.885763  310019 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:15:21.901359  310019 out.go:203] 
	W1227 09:15:21.904194  310019 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:15:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:15:21.904219  310019 out.go:285] * 
	* 
	W1227 09:15:21.907598  310019 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:15:21.910535  310019 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-730938 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)
TestForceSystemdFlag (504.74s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1227 09:57:20.766380  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m20.76773122s)
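The force-systemd start above ran for 8m20s and exited with status 109; in the stdout below the last milestone reached is "Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...", so the node container was created and the run stalled in the Kubernetes bring-up step. A sketch of follow-up commands one might run to collect more detail, assuming the profile force-systemd-flag-779725 from the command above is still present:

	out/minikube-linux-arm64 -p force-systemd-flag-779725 status
	out/minikube-linux-arm64 -p force-systemd-flag-779725 logs --file=logs.txt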
-- stdout --
	* [force-systemd-flag-779725] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-779725" primary control-plane node in "force-systemd-flag-779725" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	
-- /stdout --
** stderr ** 
	I1227 09:55:52.985585  484533 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:55:52.985780  484533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:55:52.985806  484533 out.go:374] Setting ErrFile to fd 2...
	I1227 09:55:52.985825  484533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:55:52.986303  484533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:55:52.986835  484533 out.go:368] Setting JSON to false
	I1227 09:55:52.987745  484533 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9502,"bootTime":1766819851,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:55:52.987863  484533 start.go:143] virtualization:  
	I1227 09:55:52.991395  484533 out.go:179] * [force-systemd-flag-779725] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:55:52.993846  484533 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:55:52.993919  484533 notify.go:221] Checking for updates...
	I1227 09:55:53.000344  484533 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:55:53.003707  484533 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:55:53.007511  484533 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:55:53.010720  484533 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:55:53.013752  484533 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:55:53.017387  484533 config.go:182] Loaded profile config "force-systemd-env-029895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:55:53.017511  484533 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:55:53.047437  484533 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:55:53.047564  484533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:55:53.110402  484533 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:55:53.10014062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:55:53.110514  484533 docker.go:319] overlay module found
	I1227 09:55:53.114785  484533 out.go:179] * Using the docker driver based on user configuration
	I1227 09:55:53.117515  484533 start.go:309] selected driver: docker
	I1227 09:55:53.117531  484533 start.go:928] validating driver "docker" against <nil>
	I1227 09:55:53.117550  484533 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:55:53.118403  484533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:55:53.181601  484533 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:55:53.167902272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:55:53.181751  484533 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:55:53.181970  484533 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:55:53.184753  484533 out.go:179] * Using Docker driver with root privileges
	I1227 09:55:53.187465  484533 cni.go:84] Creating CNI manager for ""
	I1227 09:55:53.187535  484533 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:55:53.187549  484533 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:55:53.187643  484533 start.go:353] cluster config:
	{Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:55:53.190621  484533 out.go:179] * Starting "force-systemd-flag-779725" primary control-plane node in "force-systemd-flag-779725" cluster
	I1227 09:55:53.193330  484533 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:55:53.196297  484533 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:55:53.199155  484533 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:55:53.199207  484533 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:55:53.199220  484533 cache.go:65] Caching tarball of preloaded images
	I1227 09:55:53.199229  484533 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:55:53.199305  484533 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:55:53.199316  484533 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:55:53.199434  484533 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/config.json ...
	I1227 09:55:53.199453  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/config.json: {Name:mk96df6b6bceeb873dcb64d2217c60d1a3551e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:55:53.218541  484533 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:55:53.218566  484533 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:55:53.218587  484533 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:55:53.218617  484533 start.go:360] acquireMachinesLock for force-systemd-flag-779725: {Name:mkfa95052f8385e546a22dbee7799fa0cde0dd51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:55:53.218736  484533 start.go:364] duration metric: took 98.331µs to acquireMachinesLock for "force-systemd-flag-779725"
	I1227 09:55:53.218768  484533 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:55:53.218843  484533 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:55:53.222123  484533 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:55:53.222408  484533 start.go:159] libmachine.API.Create for "force-systemd-flag-779725" (driver="docker")
	I1227 09:55:53.222449  484533 client.go:173] LocalClient.Create starting
	I1227 09:55:53.222519  484533 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 09:55:53.222563  484533 main.go:144] libmachine: Decoding PEM data...
	I1227 09:55:53.222582  484533 main.go:144] libmachine: Parsing certificate...
	I1227 09:55:53.222634  484533 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 09:55:53.222660  484533 main.go:144] libmachine: Decoding PEM data...
	I1227 09:55:53.222672  484533 main.go:144] libmachine: Parsing certificate...
	I1227 09:55:53.223033  484533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-779725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:55:53.239101  484533 cli_runner.go:211] docker network inspect force-systemd-flag-779725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:55:53.239200  484533 network_create.go:284] running [docker network inspect force-systemd-flag-779725] to gather additional debugging logs...
	I1227 09:55:53.239227  484533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-779725
	W1227 09:55:53.255088  484533 cli_runner.go:211] docker network inspect force-systemd-flag-779725 returned with exit code 1
	I1227 09:55:53.255128  484533 network_create.go:287] error running [docker network inspect force-systemd-flag-779725]: docker network inspect force-systemd-flag-779725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-779725 not found
	I1227 09:55:53.255148  484533 network_create.go:289] output of [docker network inspect force-systemd-flag-779725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-779725 not found
	
	** /stderr **
	I1227 09:55:53.255269  484533 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:55:53.272556  484533 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 09:55:53.272985  484533 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 09:55:53.273283  484533 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 09:55:53.273750  484533 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a210c0}
	I1227 09:55:53.273775  484533 network_create.go:124] attempt to create docker network force-systemd-flag-779725 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:55:53.273840  484533 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-779725 force-systemd-flag-779725
	I1227 09:55:53.332153  484533 network_create.go:108] docker network force-systemd-flag-779725 192.168.76.0/24 created
	I1227 09:55:53.332185  484533 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-779725" container
	I1227 09:55:53.332269  484533 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:55:53.349497  484533 cli_runner.go:164] Run: docker volume create force-systemd-flag-779725 --label name.minikube.sigs.k8s.io=force-systemd-flag-779725 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:55:53.367363  484533 oci.go:103] Successfully created a docker volume force-systemd-flag-779725
	I1227 09:55:53.367455  484533 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-779725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-779725 --entrypoint /usr/bin/test -v force-systemd-flag-779725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:55:53.914927  484533 oci.go:107] Successfully prepared a docker volume force-systemd-flag-779725
	I1227 09:55:53.914996  484533 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:55:53.915017  484533 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:55:53.915088  484533 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-779725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:55:57.830866  484533 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-779725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.915713139s)
	I1227 09:55:57.830903  484533 kic.go:203] duration metric: took 3.915882421s to extract preloaded images to volume ...
	W1227 09:55:57.831035  484533 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:55:57.831151  484533 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:55:57.883318  484533 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-779725 --name force-systemd-flag-779725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-779725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-779725 --network force-systemd-flag-779725 --ip 192.168.76.2 --volume force-systemd-flag-779725:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:55:58.204888  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Running}}
	I1227 09:55:58.226405  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Status}}
	I1227 09:55:58.245312  484533 cli_runner.go:164] Run: docker exec force-systemd-flag-779725 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:55:58.292687  484533 oci.go:144] the created container "force-systemd-flag-779725" has a running status.
	I1227 09:55:58.292719  484533 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa...
	I1227 09:55:58.595860  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:55:58.595909  484533 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:55:58.623920  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Status}}
	I1227 09:55:58.641703  484533 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:55:58.641729  484533 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-779725 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:55:58.697285  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Status}}
	I1227 09:55:58.720797  484533 machine.go:94] provisionDockerMachine start ...
	I1227 09:55:58.721011  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:55:58.748299  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:55:58.748638  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:55:58.748647  484533 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:55:58.749426  484533 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33558->127.0.0.1:33411: read: connection reset by peer
	I1227 09:56:01.890073  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-779725
	
	I1227 09:56:01.890094  484533 ubuntu.go:182] provisioning hostname "force-systemd-flag-779725"
	I1227 09:56:01.890204  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:01.921774  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:56:01.922103  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:56:01.922114  484533 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-779725 && echo "force-systemd-flag-779725" | sudo tee /etc/hostname
	I1227 09:56:02.080023  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-779725
	
	I1227 09:56:02.080102  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:02.099159  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:56:02.099474  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:56:02.099496  484533 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-779725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-779725/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-779725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:56:02.238676  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:56:02.238704  484533 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 09:56:02.238737  484533 ubuntu.go:190] setting up certificates
	I1227 09:56:02.238747  484533 provision.go:84] configureAuth start
	I1227 09:56:02.238816  484533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-779725
	I1227 09:56:02.256388  484533 provision.go:143] copyHostCerts
	I1227 09:56:02.256433  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:56:02.256466  484533 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 09:56:02.256477  484533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:56:02.256557  484533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 09:56:02.256643  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:56:02.256665  484533 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 09:56:02.256670  484533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:56:02.256701  484533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 09:56:02.256743  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:56:02.256763  484533 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 09:56:02.256770  484533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:56:02.256793  484533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 09:56:02.256843  484533 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-779725 san=[127.0.0.1 192.168.76.2 force-systemd-flag-779725 localhost minikube]
	I1227 09:56:02.820175  484533 provision.go:177] copyRemoteCerts
	I1227 09:56:02.820242  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:56:02.820293  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:02.839095  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:02.937862  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:56:02.937917  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:56:02.954900  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:56:02.955012  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:56:02.972777  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:56:02.972837  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:56:02.989993  484533 provision.go:87] duration metric: took 751.225708ms to configureAuth
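Note: the server certificate generated during configureAuth carries the SANs listed at provision.go:117 above. minikube issues it internally (in Go); purely as an illustration, an equivalent certificate could be produced with openssl, assuming CA files named as in the log:
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.force-systemd-flag-779725"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:force-systemd-flag-779725,DNS:localhost,DNS:minikube")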
	I1227 09:56:02.990022  484533 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:56:02.990286  484533 config.go:182] Loaded profile config "force-systemd-flag-779725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:56:02.990400  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.009371  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:56:03.009721  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:56:03.009743  484533 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:56:03.297764  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:56:03.297784  484533 machine.go:97] duration metric: took 4.576962743s to provisionDockerMachine
	I1227 09:56:03.297795  484533 client.go:176] duration metric: took 10.075336911s to LocalClient.Create
	I1227 09:56:03.297811  484533 start.go:167] duration metric: took 10.075406179s to libmachine.API.Create "force-systemd-flag-779725"
	I1227 09:56:03.297818  484533 start.go:293] postStartSetup for "force-systemd-flag-779725" (driver="docker")
	I1227 09:56:03.297829  484533 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:56:03.297891  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:56:03.297940  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.317175  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.420137  484533 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:56:03.425386  484533 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:56:03.425413  484533 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:56:03.425425  484533 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:56:03.425486  484533 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:56:03.425575  484533 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 09:56:03.425587  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> /etc/ssl/certs/3030432.pem
	I1227 09:56:03.425700  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:56:03.434974  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:56:03.456435  484533 start.go:296] duration metric: took 158.601608ms for postStartSetup
	I1227 09:56:03.456816  484533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-779725
	I1227 09:56:03.474010  484533 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/config.json ...
	I1227 09:56:03.474343  484533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:56:03.474394  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.491294  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.587080  484533 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:56:03.591521  484533 start.go:128] duration metric: took 10.372663152s to createHost
	I1227 09:56:03.591548  484533 start.go:83] releasing machines lock for "force-systemd-flag-779725", held for 10.372799235s
	I1227 09:56:03.591617  484533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-779725
	I1227 09:56:03.608861  484533 ssh_runner.go:195] Run: cat /version.json
	I1227 09:56:03.608919  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.609175  484533 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:56:03.609235  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.630341  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.632306  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.824484  484533 ssh_runner.go:195] Run: systemctl --version
	I1227 09:56:03.831153  484533 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:56:03.866389  484533 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:56:03.870816  484533 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:56:03.870913  484533 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:56:03.899637  484533 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
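Note: the bridge/podman CNI configs are only renamed with a .mk_disabled suffix by the find/mv above, so they remain on the node and can be listed or restored later; a small sketch (paths as shown in the log, not executed in this run):
	ls /etc/cni/net.d/*.mk_disabled
	# to restore one of them:
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist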
	I1227 09:56:03.899664  484533 start.go:496] detecting cgroup driver to use...
	I1227 09:56:03.899679  484533 start.go:500] using "systemd" cgroup driver as enforced via flags
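Note: this profile enforces the systemd cgroup driver via flags rather than auto-detection. A hypothetical start invocation that would enforce it looks like the following; the exact flags used by this test run are not shown in this excerpt:
	out/minikube-linux-arm64 start -p force-systemd-flag-779725 \
	  --driver=docker --container-runtime=crio --force-systemd=true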
	I1227 09:56:03.899734  484533 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:56:03.917759  484533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:56:03.930590  484533 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:56:03.930658  484533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:56:03.949239  484533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:56:03.967893  484533 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:56:04.087613  484533 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:56:04.236434  484533 docker.go:234] disabling docker service ...
	I1227 09:56:04.236533  484533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:56:04.258720  484533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:56:04.272670  484533 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:56:04.397721  484533 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:56:04.522806  484533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:56:04.536017  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:56:04.550767  484533 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:56:04.550852  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.560105  484533 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:56:04.560201  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.570142  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.579772  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.588919  484533 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:56:04.597463  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.607037  484533 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.621447  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.631131  484533 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:56:04.638826  484533 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:56:04.646373  484533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:56:04.761276  484533 ssh_runner.go:195] Run: sudo systemctl restart crio
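Note: taken together, the sed/grep edits above leave roughly the following keys in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (only the edited keys are reproduced from the logged commands; surrounding sections and unrelated settings are omitted):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]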
	I1227 09:56:04.955089  484533 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:56:04.955160  484533 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:56:04.959251  484533 start.go:574] Will wait 60s for crictl version
	I1227 09:56:04.959361  484533 ssh_runner.go:195] Run: which crictl
	I1227 09:56:04.963057  484533 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:56:04.986231  484533 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:56:04.986384  484533 ssh_runner.go:195] Run: crio --version
	I1227 09:56:05.016778  484533 ssh_runner.go:195] Run: crio --version
	I1227 09:56:05.050525  484533 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:56:05.053292  484533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-779725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:56:05.068946  484533 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:56:05.072677  484533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:56:05.082493  484533 kubeadm.go:884] updating cluster {Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:56:05.082626  484533 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:56:05.082693  484533 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:56:05.121874  484533 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:56:05.121901  484533 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:56:05.121957  484533 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:56:05.148169  484533 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:56:05.148235  484533 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:56:05.148258  484533 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:56:05.148376  484533 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-779725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
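Note: the kubelet unit drop-in printed above is written a few lines later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes) alongside /lib/systemd/system/kubelet.service. The effective unit can be reviewed on the node with:
	systemctl cat kubelet
	# shows kubelet.service plus the 10-kubeadm.conf drop-in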
	I1227 09:56:05.148475  484533 ssh_runner.go:195] Run: crio config
	I1227 09:56:05.206823  484533 cni.go:84] Creating CNI manager for ""
	I1227 09:56:05.206849  484533 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:56:05.206863  484533 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:56:05.206887  484533 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-779725 NodeName:force-systemd-flag-779725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:56:05.207016  484533 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-779725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
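Note: one way to sanity-check a generated config like the one above without modifying node state is a kubeadm dry run; a sketch using the binary and config paths from this log (not executed in this run):
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run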
	
	I1227 09:56:05.207095  484533 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:56:05.215092  484533 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:56:05.215172  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:56:05.222984  484533 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1227 09:56:05.235863  484533 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:56:05.249729  484533 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1227 09:56:05.262651  484533 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:56:05.266142  484533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:56:05.276049  484533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:56:05.391751  484533 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:56:05.409597  484533 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725 for IP: 192.168.76.2
	I1227 09:56:05.409620  484533 certs.go:195] generating shared ca certs ...
	I1227 09:56:05.409637  484533 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:05.409782  484533 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:56:05.409833  484533 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:56:05.409843  484533 certs.go:257] generating profile certs ...
	I1227 09:56:05.409896  484533 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.key
	I1227 09:56:05.409921  484533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.crt with IP's: []
	I1227 09:56:05.819192  484533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.crt ...
	I1227 09:56:05.819226  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.crt: {Name:mkd1d275c3c275bb893e96ad8a5f4872b9397052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:05.819425  484533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.key ...
	I1227 09:56:05.819439  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.key: {Name:mkdfedbef7b759979f4447f7a607c257e91a7898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:05.819535  484533 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec
	I1227 09:56:05.819551  484533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:56:06.116438  484533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec ...
	I1227 09:56:06.116469  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec: {Name:mk761e2004e9f22386d59f827364db9f82f7df23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.116661  484533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec ...
	I1227 09:56:06.116675  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec: {Name:mkc6c7023e856a9d390622f91a838a5e786a71be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.116764  484533 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt
	I1227 09:56:06.116843  484533 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key
	I1227 09:56:06.116933  484533 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key
	I1227 09:56:06.116952  484533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt with IP's: []
	I1227 09:56:06.250408  484533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt ...
	I1227 09:56:06.250442  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt: {Name:mkd8f7c3da466bc4f00268e09334061825876390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.250650  484533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key ...
	I1227 09:56:06.250665  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key: {Name:mk3b09064dff809062bef247e863b0fbfa5fc48d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.250760  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:56:06.250783  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:56:06.250798  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:56:06.250815  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:56:06.250836  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:56:06.250850  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:56:06.250867  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:56:06.250878  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:56:06.250939  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 09:56:06.250983  484533 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 09:56:06.250996  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:56:06.251022  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:56:06.251051  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:56:06.251078  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:56:06.251127  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:56:06.251161  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.251179  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.251190  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem -> /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.251756  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:56:06.272067  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:56:06.290346  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:56:06.307837  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:56:06.325835  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:56:06.343691  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:56:06.361569  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:56:06.378946  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:56:06.398560  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 09:56:06.417740  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:56:06.435383  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 09:56:06.452867  484533 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:56:06.465304  484533 ssh_runner.go:195] Run: openssl version
	I1227 09:56:06.471936  484533 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.479535  484533 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:56:06.487152  484533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.490950  484533 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.491022  484533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.532152  484533 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:56:06.539692  484533 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:56:06.547003  484533 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.554481  484533 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 09:56:06.561894  484533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.565876  484533 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.565948  484533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.607064  484533 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:56:06.614989  484533 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 09:56:06.622266  484533 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.629626  484533 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 09:56:06.637235  484533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.640795  484533 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.640900  484533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.681904  484533 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:56:06.689401  484533 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
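Note: the link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the usual OpenSSL trust-store convention of the certificate's subject hash plus a .0 suffix; the hash comes from the openssl invocation already shown, e.g.:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0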
	I1227 09:56:06.696833  484533 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:56:06.700609  484533 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:56:06.700663  484533 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:56:06.700744  484533 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:56:06.700807  484533 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:56:06.727176  484533 cri.go:96] found id: ""
	I1227 09:56:06.727247  484533 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:56:06.735188  484533 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:56:06.743319  484533 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:56:06.743410  484533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:56:06.751614  484533 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:56:06.751680  484533 kubeadm.go:158] found existing configuration files:
	
	I1227 09:56:06.751746  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:56:06.759507  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:56:06.759571  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:56:06.766845  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:56:06.774563  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:56:06.774631  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:56:06.781805  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:56:06.789306  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:56:06.789389  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:56:06.797071  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:56:06.804915  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:56:06.804991  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:56:06.812295  484533 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:56:06.849359  484533 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:56:06.849523  484533 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:56:06.941272  484533 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:56:06.941349  484533 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:56:06.941390  484533 kubeadm.go:319] OS: Linux
	I1227 09:56:06.941440  484533 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:56:06.941491  484533 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:56:06.941541  484533 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:56:06.941600  484533 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:56:06.941651  484533 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:56:06.941702  484533 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:56:06.941753  484533 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:56:06.941804  484533 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:56:06.941854  484533 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:56:07.013986  484533 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:56:07.014230  484533 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:56:07.014385  484533 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:56:07.026541  484533 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:56:07.033211  484533 out.go:252]   - Generating certificates and keys ...
	I1227 09:56:07.033314  484533 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:56:07.033397  484533 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:56:07.147983  484533 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:56:07.196342  484533 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:56:07.292471  484533 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:56:07.406603  484533 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:56:07.797541  484533 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:56:07.797702  484533 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:56:08.092997  484533 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:56:08.093189  484533 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:56:08.332578  484533 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:56:08.826008  484533 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:56:09.186234  484533 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:56:09.186774  484533 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:56:09.323204  484533 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:56:09.676751  484533 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:56:09.832347  484533 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:56:10.112003  484533 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:56:10.327460  484533 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:56:10.328358  484533 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:56:10.331150  484533 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:56:10.336747  484533 out.go:252]   - Booting up control plane ...
	I1227 09:56:10.336912  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:56:10.337014  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:56:10.337120  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:56:10.351272  484533 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:56:10.351587  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:56:10.360769  484533 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:56:10.360949  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:56:10.361018  484533 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:56:10.495827  484533 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:56:10.495946  484533 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:00:10.496698  484533 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000942644s
	I1227 10:00:10.496730  484533 kubeadm.go:319] 
	I1227 10:00:10.496789  484533 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:00:10.496827  484533 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:00:10.496936  484533 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:00:10.496945  484533 kubeadm.go:319] 
	I1227 10:00:10.497048  484533 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:00:10.497084  484533 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:00:10.497119  484533 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:00:10.497127  484533 kubeadm.go:319] 
	I1227 10:00:10.512739  484533 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:00:10.513169  484533 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:00:10.513287  484533 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:00:10.513526  484533 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:00:10.513534  484533 kubeadm.go:319] 
	I1227 10:00:10.513603  484533 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:00:10.513743  484533 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000942644s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000942644s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 10:00:10.513833  484533 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 10:00:10.945663  484533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:00:10.962736  484533 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:00:10.962805  484533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:00:10.974233  484533 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:00:10.974259  484533 kubeadm.go:158] found existing configuration files:
	
	I1227 10:00:10.974312  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:00:10.984084  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:00:10.984155  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:00:10.992747  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:00:11.002221  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:00:11.002302  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:00:11.013560  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:00:11.023445  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:00:11.023515  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:00:11.033374  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:00:11.044183  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:00:11.044265  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:00:11.053082  484533 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:00:11.114718  484533 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:00:11.115046  484533 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:00:11.208881  484533 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:00:11.208952  484533 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:00:11.208993  484533 kubeadm.go:319] OS: Linux
	I1227 10:00:11.209043  484533 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:00:11.209096  484533 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:00:11.209147  484533 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:00:11.209199  484533 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:00:11.209249  484533 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:00:11.209305  484533 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:00:11.209356  484533 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:00:11.209409  484533 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:00:11.209459  484533 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:00:11.292695  484533 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:00:11.292850  484533 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:00:11.292982  484533 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:00:11.302691  484533 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:00:11.307682  484533 out.go:252]   - Generating certificates and keys ...
	I1227 10:00:11.307842  484533 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:00:11.307943  484533 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:00:11.308050  484533 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:00:11.308151  484533 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:00:11.308259  484533 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:00:11.308351  484533 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:00:11.308444  484533 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:00:11.308552  484533 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:00:11.308661  484533 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:00:11.308778  484533 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:00:11.308853  484533 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:00:11.308930  484533 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:00:11.911987  484533 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:00:12.188169  484533 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:00:12.376543  484533 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:00:12.810540  484533 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:00:12.909733  484533 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:00:12.914047  484533 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:00:12.914141  484533 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:00:12.917330  484533 out.go:252]   - Booting up control plane ...
	I1227 10:00:12.917452  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:00:12.917776  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:00:12.917872  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:00:12.960022  484533 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:00:12.960353  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:00:12.969156  484533 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:00:12.969480  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:00:12.970864  484533 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:00:13.228211  484533 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:00:13.228346  484533 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:04:13.223020  484533 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001216783s
	I1227 10:04:13.223055  484533 kubeadm.go:319] 
	I1227 10:04:13.223162  484533 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:04:13.223220  484533 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:04:13.223667  484533 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:04:13.223680  484533 kubeadm.go:319] 
	I1227 10:04:13.223872  484533 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:04:13.224056  484533 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:04:13.224112  484533 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:04:13.224118  484533 kubeadm.go:319] 
	I1227 10:04:13.230061  484533 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:04:13.230502  484533 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:04:13.230617  484533 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:04:13.230854  484533 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:04:13.230864  484533 kubeadm.go:319] 
	I1227 10:04:13.230933  484533 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:04:13.230993  484533 kubeadm.go:403] duration metric: took 8m6.530333747s to StartCluster
	I1227 10:04:13.231031  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:04:13.231094  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:04:13.259685  484533 cri.go:96] found id: ""
	I1227 10:04:13.259719  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.259728  484533 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:04:13.259734  484533 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:04:13.259797  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:04:13.289141  484533 cri.go:96] found id: ""
	I1227 10:04:13.289173  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.289182  484533 logs.go:284] No container was found matching "etcd"
	I1227 10:04:13.289194  484533 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:04:13.289261  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:04:13.317196  484533 cri.go:96] found id: ""
	I1227 10:04:13.317223  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.317231  484533 logs.go:284] No container was found matching "coredns"
	I1227 10:04:13.317237  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:04:13.317295  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:04:13.346839  484533 cri.go:96] found id: ""
	I1227 10:04:13.346880  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.346890  484533 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:04:13.346897  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:04:13.346959  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:04:13.381429  484533 cri.go:96] found id: ""
	I1227 10:04:13.381452  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.381472  484533 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:04:13.381479  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:04:13.381547  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:04:13.421505  484533 cri.go:96] found id: ""
	I1227 10:04:13.421532  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.421540  484533 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:04:13.421548  484533 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:04:13.421608  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:04:13.454402  484533 cri.go:96] found id: ""
	I1227 10:04:13.454476  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.454513  484533 logs.go:284] No container was found matching "kindnet"
	I1227 10:04:13.454544  484533 logs.go:123] Gathering logs for CRI-O ...
	I1227 10:04:13.454573  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 10:04:13.491644  484533 logs.go:123] Gathering logs for container status ...
	I1227 10:04:13.491679  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:04:13.524206  484533 logs.go:123] Gathering logs for kubelet ...
	I1227 10:04:13.524234  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:04:13.591823  484533 logs.go:123] Gathering logs for dmesg ...
	I1227 10:04:13.591861  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:04:13.609186  484533 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:04:13.609215  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:04:13.684353  484533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:04:13.675473    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.676200    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.677792    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.678207    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.679912    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:04:13.675473    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.676200    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.677792    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.678207    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.679912    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1227 10:04:13.684433  484533 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:04:13.684480  484533 out.go:285] * 
	* 
	W1227 10:04:13.684550  484533 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:04:13.684583  484533 out.go:285] * 
	* 
	W1227 10:04:13.684830  484533 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:04:13.694710  484533 out.go:203] 
	W1227 10:04:13.698439  484533 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:04:13.698500  484533 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:04:13.698524  484533 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:04:13.702331  484533 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-779725 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 10:04:14.052257129 +0000 UTC m=+3100.598657179
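A possible follow-up, sketched from the suggestion emitted in the log above (same profile name, binary path, and flags as the failed invocation; not verified against this run, and the extra kubelet config may or may not resolve the health-check timeout):

	out/minikube-linux-arm64 start -p force-systemd-flag-779725 --memory=3072 --force-systemd \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	# If the kubelet health check still times out, inspect the kubelet from inside the node
	# and collect logs for the GitHub issue, as the report's own guidance recommends:
	out/minikube-linux-arm64 -p force-systemd-flag-779725 ssh -- sudo systemctl status kubelet --no-pager
	out/minikube-linux-arm64 -p force-systemd-flag-779725 ssh -- sudo journalctl -xeu kubelet --no-pager
	out/minikube-linux-arm64 -p force-systemd-flag-779725 ssh -- curl -sS http://127.0.0.1:10248/healthz
	out/minikube-linux-arm64 logs -p force-systemd-flag-779725 --file=logs.txt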
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-779725
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-779725:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169",
	        "Created": "2025-12-27T09:55:57.89771499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:55:57.971140361Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169/hostname",
	        "HostsPath": "/var/lib/docker/containers/195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169/hosts",
	        "LogPath": "/var/lib/docker/containers/195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169/195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169-json.log",
	        "Name": "/force-systemd-flag-779725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-779725:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-779725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "195a68301e5300b759b102a1f6c64d8560ef2078b32e0701146596a4f37de169",
	                "LowerDir": "/var/lib/docker/overlay2/baf18d5755463fa2c4db6f3a8e15acaeba24945682eab3226b1231928cee4712-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/baf18d5755463fa2c4db6f3a8e15acaeba24945682eab3226b1231928cee4712/merged",
	                "UpperDir": "/var/lib/docker/overlay2/baf18d5755463fa2c4db6f3a8e15acaeba24945682eab3226b1231928cee4712/diff",
	                "WorkDir": "/var/lib/docker/overlay2/baf18d5755463fa2c4db6f3a8e15acaeba24945682eab3226b1231928cee4712/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-779725",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-779725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-779725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-779725",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-779725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0e013ab8e06b30e8a72d1783eac5a2918e255ef2ae55357ba33076e1c0a7d65",
	            "SandboxKey": "/var/run/docker/netns/e0e013ab8e06",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-779725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:81:7b:fb:ec:b7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "489f01168e32896a6dbd8c4ed0dcb4a0431e06ebb160a2fa17327250e4021611",
	                    "EndpointID": "640ced9e6b6aa54f90fd34a6b08adb456ee9aaadbd0c2141cc04ca98949f7fc4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-779725",
	                        "195a68301e53"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-779725 -n force-systemd-flag-779725
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-779725 -n force-systemd-flag-779725: exit status 6 (347.669262ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:04:14.403964  507827 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-779725" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
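The status output above reports the host container as Running but warns that kubectl points at a stale context, and the stderr shows the profile missing from the kubeconfig. On a cluster that had actually come up, the fix the tool itself suggests would look roughly like this (profile flag assumed; illustrative only, since here the start never completed):

	out/minikube-linux-arm64 update-context -p force-systemd-flag-779725
	kubectl config current-context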
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-779725 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cilium-246753                                                                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ delete  │ -p cert-expiration-028595                                                                                                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:03:26
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:03:26.045517  505250 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:03:26.045637  505250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:03:26.045648  505250 out.go:374] Setting ErrFile to fd 2...
	I1227 10:03:26.045654  505250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:03:26.045922  505250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:03:26.046372  505250 out.go:368] Setting JSON to false
	I1227 10:03:26.047318  505250 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9955,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:03:26.047396  505250 start.go:143] virtualization:  
	I1227 10:03:26.051906  505250 out.go:179] * [no-preload-021144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:03:26.054931  505250 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:03:26.055117  505250 notify.go:221] Checking for updates...
	I1227 10:03:26.060952  505250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:03:26.063893  505250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:03:26.067003  505250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:03:26.069963  505250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:03:26.072860  505250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:03:26.076344  505250 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:03:26.076878  505250 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:03:26.099457  505250 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:03:26.099618  505250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:03:26.175074  505250 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:03:26.164697704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:03:26.175189  505250 docker.go:319] overlay module found
	I1227 10:03:26.178421  505250 out.go:179] * Using the docker driver based on existing profile
	I1227 10:03:26.181361  505250 start.go:309] selected driver: docker
	I1227 10:03:26.181383  505250 start.go:928] validating driver "docker" against &{Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:03:26.181515  505250 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:03:26.182336  505250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:03:26.252562  505250 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:03:26.243394907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:03:26.252886  505250 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:03:26.252912  505250 cni.go:84] Creating CNI manager for ""
	I1227 10:03:26.252966  505250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:03:26.253007  505250 start.go:353] cluster config:
	{Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:03:26.256190  505250 out.go:179] * Starting "no-preload-021144" primary control-plane node in "no-preload-021144" cluster
	I1227 10:03:26.259020  505250 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:03:26.261969  505250 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:03:26.264868  505250 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:03:26.264952  505250 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:03:26.265024  505250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/config.json ...
	I1227 10:03:26.265290  505250 cache.go:107] acquiring lock: {Name:mk7d95993b5087d5334ae23cc35b07dd938b4c75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265370  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 10:03:26.265380  505250 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.261µs
	I1227 10:03:26.265398  505250 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 10:03:26.265411  505250 cache.go:107] acquiring lock: {Name:mk6192369ad8584a99a6720429a8e6ed9f2d2233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265442  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 10:03:26.265447  505250 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 37.523µs
	I1227 10:03:26.265453  505250 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 10:03:26.265462  505250 cache.go:107] acquiring lock: {Name:mkf532e70fa97678d09d9e1a398534a24cbf9538 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265488  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 10:03:26.265492  505250 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 32.017µs
	I1227 10:03:26.265498  505250 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 10:03:26.265507  505250 cache.go:107] acquiring lock: {Name:mk0c3ba49bab6e0c44483449eacbd8852cc4fa46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265532  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 10:03:26.265537  505250 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 31.18µs
	I1227 10:03:26.265542  505250 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 10:03:26.265555  505250 cache.go:107] acquiring lock: {Name:mkab5988ea0c107a79947dffe93ac31b732eff3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265580  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 10:03:26.265585  505250 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 31.237µs
	I1227 10:03:26.265590  505250 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 10:03:26.265598  505250 cache.go:107] acquiring lock: {Name:mkb370ce4e4194287b205d66c7b65e6a2ed45413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265625  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 10:03:26.265629  505250 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32µs
	I1227 10:03:26.265635  505250 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 10:03:26.265654  505250 cache.go:107] acquiring lock: {Name:mkafd8402b85c8e9941d589b0a0272c8df27837d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265680  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 10:03:26.265684  505250 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 31.311µs
	I1227 10:03:26.265690  505250 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 10:03:26.265698  505250 cache.go:107] acquiring lock: {Name:mkffbc7f5ad1358fd7e7925aa1649b58cadec1a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.265723  505250 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 10:03:26.265727  505250 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.703µs
	I1227 10:03:26.265732  505250 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 10:03:26.265738  505250 cache.go:87] Successfully saved all images to host disk.
	I1227 10:03:26.285649  505250 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:03:26.285671  505250 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:03:26.285692  505250 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:03:26.285727  505250 start.go:360] acquireMachinesLock for no-preload-021144: {Name:mk023bae09bbe814fea61a003c760e0dae17d436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:26.285787  505250 start.go:364] duration metric: took 38.991µs to acquireMachinesLock for "no-preload-021144"
	I1227 10:03:26.285809  505250 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:03:26.285817  505250 fix.go:54] fixHost starting: 
	I1227 10:03:26.286084  505250 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:03:26.302179  505250 fix.go:112] recreateIfNeeded on no-preload-021144: state=Stopped err=<nil>
	W1227 10:03:26.302211  505250 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:03:26.307407  505250 out.go:252] * Restarting existing docker container for "no-preload-021144" ...
	I1227 10:03:26.307499  505250 cli_runner.go:164] Run: docker start no-preload-021144
	I1227 10:03:26.560617  505250 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:03:26.581124  505250 kic.go:430] container "no-preload-021144" state is running.
	I1227 10:03:26.581643  505250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-021144
	I1227 10:03:26.604064  505250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/config.json ...
	I1227 10:03:26.604298  505250 machine.go:94] provisionDockerMachine start ...
	I1227 10:03:26.604364  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:26.625578  505250 main.go:144] libmachine: Using SSH client type: native
	I1227 10:03:26.625902  505250 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1227 10:03:26.625912  505250 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:03:26.626479  505250 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59190->127.0.0.1:33436: read: connection reset by peer
	I1227 10:03:29.765734  505250 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-021144
	
	I1227 10:03:29.765756  505250 ubuntu.go:182] provisioning hostname "no-preload-021144"
	I1227 10:03:29.765829  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:29.783746  505250 main.go:144] libmachine: Using SSH client type: native
	I1227 10:03:29.784062  505250 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1227 10:03:29.784073  505250 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-021144 && echo "no-preload-021144" | sudo tee /etc/hostname
	I1227 10:03:29.933385  505250 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-021144
	
	I1227 10:03:29.933543  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:29.956736  505250 main.go:144] libmachine: Using SSH client type: native
	I1227 10:03:29.957050  505250 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1227 10:03:29.957066  505250 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-021144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-021144/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-021144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:03:30.123213  505250 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:03:30.123247  505250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:03:30.123291  505250 ubuntu.go:190] setting up certificates
	I1227 10:03:30.123303  505250 provision.go:84] configureAuth start
	I1227 10:03:30.123376  505250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-021144
	I1227 10:03:30.144992  505250 provision.go:143] copyHostCerts
	I1227 10:03:30.145070  505250 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:03:30.145094  505250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:03:30.145186  505250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:03:30.145313  505250 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:03:30.145324  505250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:03:30.145353  505250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:03:30.145418  505250 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:03:30.145428  505250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:03:30.145453  505250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:03:30.145541  505250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.no-preload-021144 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-021144]
	I1227 10:03:30.397207  505250 provision.go:177] copyRemoteCerts
	I1227 10:03:30.397275  505250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:03:30.397321  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:30.414937  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:30.514071  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:03:30.531418  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:03:30.549317  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:03:30.566928  505250 provision.go:87] duration metric: took 443.597565ms to configureAuth
	I1227 10:03:30.566998  505250 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:03:30.567230  505250 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:03:30.567347  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:30.584900  505250 main.go:144] libmachine: Using SSH client type: native
	I1227 10:03:30.585227  505250 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1227 10:03:30.585254  505250 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:03:30.934457  505250 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:03:30.934489  505250 machine.go:97] duration metric: took 4.330171704s to provisionDockerMachine
	I1227 10:03:30.934501  505250 start.go:293] postStartSetup for "no-preload-021144" (driver="docker")
	I1227 10:03:30.934512  505250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:03:30.934577  505250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:03:30.934622  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:30.955450  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:31.058484  505250 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:03:31.062189  505250 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:03:31.062220  505250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:03:31.062232  505250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:03:31.062296  505250 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:03:31.062386  505250 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:03:31.062490  505250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:03:31.070327  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:03:31.088837  505250 start.go:296] duration metric: took 154.320086ms for postStartSetup
	I1227 10:03:31.088934  505250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:03:31.088986  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:31.107035  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:31.203420  505250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:03:31.208287  505250 fix.go:56] duration metric: took 4.922463874s for fixHost
	I1227 10:03:31.208316  505250 start.go:83] releasing machines lock for "no-preload-021144", held for 4.922517717s
	I1227 10:03:31.208398  505250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-021144
	I1227 10:03:31.225516  505250 ssh_runner.go:195] Run: cat /version.json
	I1227 10:03:31.225585  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:31.225641  505250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:03:31.225694  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:31.245040  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:31.259197  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:31.341775  505250 ssh_runner.go:195] Run: systemctl --version
	I1227 10:03:31.438987  505250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:03:31.476538  505250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:03:31.481040  505250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:03:31.481128  505250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:03:31.489052  505250 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:03:31.489078  505250 start.go:496] detecting cgroup driver to use...
	I1227 10:03:31.489111  505250 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:03:31.489165  505250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:03:31.504591  505250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:03:31.517900  505250 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:03:31.517992  505250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:03:31.533975  505250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:03:31.547247  505250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:03:31.662378  505250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:03:31.778257  505250 docker.go:234] disabling docker service ...
	I1227 10:03:31.778360  505250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:03:31.794248  505250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:03:31.807397  505250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:03:31.918727  505250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:03:32.035451  505250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:03:32.049429  505250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:03:32.064137  505250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:03:32.064246  505250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.073258  505250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:03:32.073359  505250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.082629  505250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.091731  505250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.100613  505250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:03:32.108738  505250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.117991  505250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.126639  505250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:03:32.135431  505250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:03:32.143861  505250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:03:32.152489  505250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:03:32.284618  505250 ssh_runner.go:195] Run: sudo systemctl restart crio
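The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough sketch of the keys those edits leave in the drop-in (illustrative only; section placement and any other settings in the real file may differ):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The cgroup_manager value matches the "cgroupfs" driver detected on the host earlier in this run, so CRI-O and the kubelet agree on the cgroup driver once crio is restarted.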
	I1227 10:03:32.466139  505250 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:03:32.466253  505250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:03:32.471047  505250 start.go:574] Will wait 60s for crictl version
	I1227 10:03:32.471111  505250 ssh_runner.go:195] Run: which crictl
	I1227 10:03:32.474804  505250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:03:32.499466  505250 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:03:32.499555  505250 ssh_runner.go:195] Run: crio --version
	I1227 10:03:32.529202  505250 ssh_runner.go:195] Run: crio --version
	I1227 10:03:32.561070  505250 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:03:32.564073  505250 cli_runner.go:164] Run: docker network inspect no-preload-021144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:03:32.580642  505250 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:03:32.584665  505250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:03:32.594667  505250 kubeadm.go:884] updating cluster {Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:03:32.594790  505250 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:03:32.594833  505250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:03:32.636122  505250 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:03:32.636149  505250 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:03:32.636162  505250 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:03:32.636254  505250 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-021144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:03:32.636336  505250 ssh_runner.go:195] Run: crio config
	I1227 10:03:32.688398  505250 cni.go:84] Creating CNI manager for ""
	I1227 10:03:32.688426  505250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:03:32.688471  505250 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:03:32.688500  505250 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-021144 NodeName:no-preload-021144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:03:32.688644  505250 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-021144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:03:32.688720  505250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:03:32.696645  505250 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:03:32.696737  505250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:03:32.705027  505250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:03:32.718261  505250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:03:32.731213  505250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
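The rendered kubeadm config (2234 bytes above) is staged as /var/tmp/minikube/kubeadm.yaml.new; further down it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. As an illustration only (not a step minikube performs in this run), the staged file could also be sanity-checked against the kubeadm API types with:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new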
	I1227 10:03:32.744964  505250 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:03:32.748830  505250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:03:32.758496  505250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:03:32.886378  505250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:03:32.919364  505250 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144 for IP: 192.168.85.2
	I1227 10:03:32.919431  505250 certs.go:195] generating shared ca certs ...
	I1227 10:03:32.919462  505250 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:03:32.919631  505250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:03:32.919719  505250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:03:32.919744  505250 certs.go:257] generating profile certs ...
	I1227 10:03:32.919867  505250 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.key
	I1227 10:03:32.919985  505250 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key.d17a6b29
	I1227 10:03:32.920066  505250 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.key
	I1227 10:03:32.920221  505250 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:03:32.920286  505250 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:03:32.920313  505250 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:03:32.920369  505250 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:03:32.920413  505250 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:03:32.920464  505250 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:03:32.920536  505250 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:03:32.921236  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:03:32.940172  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:03:32.974737  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:03:32.993043  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:03:33.017679  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:03:33.039426  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:03:33.068926  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:03:33.093710  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1227 10:03:33.118852  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:03:33.146768  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:03:33.165399  505250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:03:33.186202  505250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:03:33.200100  505250 ssh_runner.go:195] Run: openssl version
	I1227 10:03:33.206866  505250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:03:33.215083  505250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:03:33.223680  505250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:03:33.228131  505250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:03:33.228217  505250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:03:33.273079  505250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:03:33.280619  505250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:03:33.287962  505250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:03:33.295827  505250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:03:33.300049  505250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:03:33.300115  505250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:03:33.341418  505250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:03:33.348994  505250 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:03:33.356336  505250 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:03:33.364093  505250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:03:33.368136  505250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:03:33.368202  505250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:03:33.409289  505250 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:03:33.416853  505250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:03:33.420855  505250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:03:33.462184  505250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:03:33.503399  505250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:03:33.544563  505250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:03:33.598115  505250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:03:33.685406  505250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
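The six openssl checks above use -checkend 86400: openssl x509 exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, so a zero exit means the existing cert can be reused. A standalone sketch of the same check, using one of the paths from this run:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"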
	I1227 10:03:33.790348  505250 kubeadm.go:401] StartCluster: {Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:03:33.790485  505250 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:03:33.790559  505250 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:03:33.836477  505250 cri.go:96] found id: "826fac1ed8726ef3a42c94e0e83f18ada09304c6805c9f89f7b3c2d04e4c1a04"
	I1227 10:03:33.836548  505250 cri.go:96] found id: "8dcf711110ef0028adc15392e388eb3ea778715b2b9bac9fb0a2657eff4887a0"
	I1227 10:03:33.836567  505250 cri.go:96] found id: "327ad0c5ea77e5cb07dbc495716c696dbfb5bd8050c9432839733c1be978ab8f"
	I1227 10:03:33.836597  505250 cri.go:96] found id: "de4d8646c03a270d9b795d812404b843b39536ef99277aa58fc56f50232ffd89"
	I1227 10:03:33.836627  505250 cri.go:96] found id: ""
	I1227 10:03:33.836695  505250 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:03:33.855571  505250 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:03:33Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:03:33.855695  505250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:03:33.868545  505250 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:03:33.868614  505250 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:03:33.868705  505250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:03:33.882940  505250 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:03:33.883406  505250 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-021144" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:03:33.883548  505250 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-021144" cluster setting kubeconfig missing "no-preload-021144" context setting]
	I1227 10:03:33.883888  505250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:03:33.885315  505250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:03:33.897625  505250 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:03:33.897699  505250 kubeadm.go:602] duration metric: took 29.065396ms to restartPrimaryControlPlane
	I1227 10:03:33.897725  505250 kubeadm.go:403] duration metric: took 107.389852ms to StartCluster
	I1227 10:03:33.897766  505250 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:03:33.897847  505250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:03:33.898595  505250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:03:33.898857  505250 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:03:33.899243  505250 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:03:33.899316  505250 addons.go:70] Setting storage-provisioner=true in profile "no-preload-021144"
	I1227 10:03:33.899331  505250 addons.go:239] Setting addon storage-provisioner=true in "no-preload-021144"
	W1227 10:03:33.899337  505250 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:03:33.899359  505250 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:03:33.900212  505250 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:03:33.900583  505250 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:03:33.900625  505250 addons.go:70] Setting dashboard=true in profile "no-preload-021144"
	I1227 10:03:33.900651  505250 addons.go:239] Setting addon dashboard=true in "no-preload-021144"
	W1227 10:03:33.900659  505250 addons.go:248] addon dashboard should already be in state true
	I1227 10:03:33.900692  505250 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:03:33.900734  505250 addons.go:70] Setting default-storageclass=true in profile "no-preload-021144"
	I1227 10:03:33.900766  505250 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-021144"
	I1227 10:03:33.901042  505250 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:03:33.901143  505250 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:03:33.911180  505250 out.go:179] * Verifying Kubernetes components...
	I1227 10:03:33.914431  505250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:03:33.956944  505250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:03:33.960468  505250 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:03:33.960490  505250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:03:33.960667  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:33.974695  505250 addons.go:239] Setting addon default-storageclass=true in "no-preload-021144"
	W1227 10:03:33.974716  505250 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:03:33.974743  505250 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:03:33.975158  505250 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:03:33.978218  505250 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:03:33.981167  505250 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:03:33.983934  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:03:33.983959  505250 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:03:33.984041  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:34.010360  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:34.019626  505250 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:03:34.019651  505250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:03:34.019726  505250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:03:34.050944  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:34.070298  505250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:03:34.221710  505250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:03:34.254140  505250 node_ready.go:35] waiting up to 6m0s for node "no-preload-021144" to be "Ready" ...
	I1227 10:03:34.255542  505250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:03:34.298440  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:03:34.298504  505250 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:03:34.336118  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:03:34.336184  505250 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:03:34.369665  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:03:34.369692  505250 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:03:34.372370  505250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:03:34.503376  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:03:34.503396  505250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:03:34.553870  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:03:34.553936  505250 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:03:34.570916  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:03:34.570982  505250 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:03:34.585673  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:03:34.585738  505250 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:03:34.600723  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:03:34.600791  505250 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:03:34.615633  505250 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:03:34.615698  505250 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:03:34.631140  505250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:03:37.188014  505250 node_ready.go:49] node "no-preload-021144" is "Ready"
	I1227 10:03:37.188041  505250 node_ready.go:38] duration metric: took 2.933761754s for node "no-preload-021144" to be "Ready" ...
	I1227 10:03:37.188054  505250 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:03:37.188113  505250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:03:38.336679  505250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.081075425s)
	I1227 10:03:38.336799  505250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.964409988s)
	I1227 10:03:38.337144  505250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.705921841s)
	I1227 10:03:38.337335  505250 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.14921059s)
	I1227 10:03:38.337395  505250 api_server.go:72] duration metric: took 4.438490073s to wait for apiserver process to appear ...
	I1227 10:03:38.337414  505250 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:03:38.337460  505250 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:03:38.340357  505250 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-021144 addons enable metrics-server
	
	I1227 10:03:38.347865  505250 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:03:38.347943  505250 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 10:03:38.357441  505250 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:03:38.360219  505250 addons.go:530] duration metric: took 4.460971913s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:03:38.837766  505250 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:03:38.846329  505250 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:03:38.847536  505250 api_server.go:141] control plane version: v1.35.0
	I1227 10:03:38.847566  505250 api_server.go:131] duration metric: took 510.133697ms to wait for apiserver health ...
	I1227 10:03:38.847575  505250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:03:38.851496  505250 system_pods.go:59] 8 kube-system pods found
	I1227 10:03:38.851537  505250 system_pods.go:61] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:03:38.851568  505250 system_pods.go:61] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:03:38.851586  505250 system_pods.go:61] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:03:38.851596  505250 system_pods.go:61] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:03:38.851608  505250 system_pods.go:61] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:03:38.851616  505250 system_pods.go:61] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:03:38.851631  505250 system_pods.go:61] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:03:38.851668  505250 system_pods.go:61] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:03:38.851683  505250 system_pods.go:74] duration metric: took 4.100212ms to wait for pod list to return data ...
	I1227 10:03:38.851692  505250 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:03:38.854480  505250 default_sa.go:45] found service account: "default"
	I1227 10:03:38.854502  505250 default_sa.go:55] duration metric: took 2.79823ms for default service account to be created ...
	I1227 10:03:38.854511  505250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:03:38.857283  505250 system_pods.go:86] 8 kube-system pods found
	I1227 10:03:38.857312  505250 system_pods.go:89] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:03:38.857353  505250 system_pods.go:89] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:03:38.857371  505250 system_pods.go:89] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:03:38.857379  505250 system_pods.go:89] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:03:38.857387  505250 system_pods.go:89] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:03:38.857397  505250 system_pods.go:89] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:03:38.857421  505250 system_pods.go:89] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:03:38.857436  505250 system_pods.go:89] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:03:38.857444  505250 system_pods.go:126] duration metric: took 2.928529ms to wait for k8s-apps to be running ...
	I1227 10:03:38.857467  505250 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:03:38.857538  505250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:03:38.872681  505250 system_svc.go:56] duration metric: took 15.203857ms WaitForService to wait for kubelet
	I1227 10:03:38.872713  505250 kubeadm.go:587] duration metric: took 4.973805705s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:03:38.872752  505250 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:03:38.875352  505250 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:03:38.875383  505250 node_conditions.go:123] node cpu capacity is 2
	I1227 10:03:38.875397  505250 node_conditions.go:105] duration metric: took 2.635422ms to run NodePressure ...
	I1227 10:03:38.875445  505250 start.go:242] waiting for startup goroutines ...
	I1227 10:03:38.875453  505250 start.go:247] waiting for cluster config update ...
	I1227 10:03:38.875470  505250 start.go:256] writing updated cluster config ...
	I1227 10:03:38.875783  505250 ssh_runner.go:195] Run: rm -f paused
	I1227 10:03:38.879542  505250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:03:38.883019  505250 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p7h6b" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:03:40.889367  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:43.389296  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:45.889191  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:47.892684  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:50.395592  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:52.889040  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:54.889382  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:57.388514  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:03:59.888574  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:04:01.888860  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:04:03.889536  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:04:05.889690  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:04:08.388785  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	W1227 10:04:10.390865  505250 pod_ready.go:104] pod "coredns-7d764666f9-p7h6b" is not "Ready", error: <nil>
	I1227 10:04:13.223020  484533 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001216783s
	I1227 10:04:13.223055  484533 kubeadm.go:319] 
	I1227 10:04:13.223162  484533 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:04:13.223220  484533 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:04:13.223667  484533 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:04:13.223680  484533 kubeadm.go:319] 
	I1227 10:04:13.223872  484533 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:04:13.224056  484533 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:04:13.224112  484533 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:04:13.224118  484533 kubeadm.go:319] 
	I1227 10:04:13.230061  484533 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:04:13.230502  484533 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:04:13.230617  484533 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:04:13.230854  484533 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:04:13.230864  484533 kubeadm.go:319] 
	I1227 10:04:13.230933  484533 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:04:13.230993  484533 kubeadm.go:403] duration metric: took 8m6.530333747s to StartCluster
	I1227 10:04:13.231031  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:04:13.231094  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:04:13.259685  484533 cri.go:96] found id: ""
	I1227 10:04:13.259719  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.259728  484533 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:04:13.259734  484533 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:04:13.259797  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:04:13.289141  484533 cri.go:96] found id: ""
	I1227 10:04:13.289173  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.289182  484533 logs.go:284] No container was found matching "etcd"
	I1227 10:04:13.289194  484533 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:04:13.289261  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:04:13.317196  484533 cri.go:96] found id: ""
	I1227 10:04:13.317223  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.317231  484533 logs.go:284] No container was found matching "coredns"
	I1227 10:04:13.317237  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:04:13.317295  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:04:13.346839  484533 cri.go:96] found id: ""
	I1227 10:04:13.346880  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.346890  484533 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:04:13.346897  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:04:13.346959  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:04:13.381429  484533 cri.go:96] found id: ""
	I1227 10:04:13.381452  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.381472  484533 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:04:13.381479  484533 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:04:13.381547  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:04:13.421505  484533 cri.go:96] found id: ""
	I1227 10:04:13.421532  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.421540  484533 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:04:13.421548  484533 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:04:13.421608  484533 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:04:13.454402  484533 cri.go:96] found id: ""
	I1227 10:04:13.454476  484533 logs.go:282] 0 containers: []
	W1227 10:04:13.454513  484533 logs.go:284] No container was found matching "kindnet"
	I1227 10:04:13.454544  484533 logs.go:123] Gathering logs for CRI-O ...
	I1227 10:04:13.454573  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 10:04:13.491644  484533 logs.go:123] Gathering logs for container status ...
	I1227 10:04:13.491679  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:04:13.524206  484533 logs.go:123] Gathering logs for kubelet ...
	I1227 10:04:13.524234  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:04:13.591823  484533 logs.go:123] Gathering logs for dmesg ...
	I1227 10:04:13.591861  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:04:13.609186  484533 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:04:13.609215  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:04:13.684353  484533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:04:13.675473    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.676200    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.677792    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.678207    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.679912    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:04:13.675473    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.676200    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.677792    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.678207    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:13.679912    4904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1227 10:04:13.684433  484533 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:04:13.684480  484533 out.go:285] * 
	W1227 10:04:13.684550  484533 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:04:13.684583  484533 out.go:285] * 
	W1227 10:04:13.684830  484533 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:04:13.694710  484533 out.go:203] 
	W1227 10:04:13.698439  484533 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:04:13.698500  484533 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:04:13.698524  484533 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:04:13.702331  484533 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.943379761Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.943544989Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.943661372Z" level=info msg="Create NRI interface"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.943837424Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.943907078Z" level=info msg="runtime interface created"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.943965335Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.944031494Z" level=info msg="runtime interface starting up..."
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.94409085Z" level=info msg="starting plugins..."
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.94416263Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 09:56:04 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:04.944299066Z" level=info msg="No systemd watchdog enabled"
	Dec 27 09:56:04 force-systemd-flag-779725 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.017839633Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=a8cb9e05-5a31-4bea-95fb-4f14c3824bbd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.018842697Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=9feb120f-efb4-4161-9a4c-7a45a896cc7b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.019334839Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=a2299c18-117e-40f8-9e8c-2b1206e32c1f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.019830197Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=9ec3fa13-de2f-4d86-b315-e9d1c2678f70 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.020281493Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=c2fc66f1-de0d-4d93-aa7a-54a55d42d343 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.02074741Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=62ddc3c0-09fa-46c0-866a-48f28237b77c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:56:07 force-systemd-flag-779725 crio[834]: time="2025-12-27T09:56:07.021175084Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=6e4dc415-dd68-4286-8f9b-ec526330eb16 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.296599592Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=4795b3c2-fb87-43d3-bb33-3ac228fc65c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.297287067Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=c8b680f8-422c-4229-a77a-7d6ece592e63 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.297798975Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=29ce57a9-286e-4011-89fb-e95d0956c260 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.298399901Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=b3cda875-4453-4c58-aefa-c3804660fe9c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.298902513Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bcc37454-5cd3-4ae5-ba7f-53b0205aa8a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.299351757Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d955e40d-5b95-4881-92fc-3254c73bba70 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:11 force-systemd-flag-779725 crio[834]: time="2025-12-27T10:00:11.29977015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=847b32a7-97fc-4aa4-87af-056b79536124 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:04:15.135504    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:15.136371    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:15.138087    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:15.139027    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:04:15.139796    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:04:15 up  2:46,  0 user,  load average: 2.02, 1.82, 2.00
	Linux force-systemd-flag-779725 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 10:04:12 force-systemd-flag-779725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:04:13 force-systemd-flag-779725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 27 10:04:13 force-systemd-flag-779725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:04:13 force-systemd-flag-779725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:04:13 force-systemd-flag-779725 kubelet[4871]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:04:13 force-systemd-flag-779725 kubelet[4871]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:04:13 force-systemd-flag-779725 kubelet[4871]: E1227 10:04:13.459473    4871 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:04:13 force-systemd-flag-779725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:04:13 force-systemd-flag-779725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:04:14 force-systemd-flag-779725 kubelet[4917]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:04:14 force-systemd-flag-779725 kubelet[4917]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:04:14 force-systemd-flag-779725 kubelet[4917]: E1227 10:04:14.203349    4917 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:04:14 force-systemd-flag-779725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:04:14 force-systemd-flag-779725 kubelet[4993]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:04:14 force-systemd-flag-779725 kubelet[4993]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:04:15 force-systemd-flag-779725 kubelet[4993]: E1227 10:04:15.002214    4993 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:04:15 force-systemd-flag-779725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:04:15 force-systemd-flag-779725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
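The kubelet journal above shows the immediate cause of the timeout: kubelet v1.35 rejects its own configuration on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd keeps restarting it (restart counter 645-647) and kubeadm's wait-control-plane phase never sees a healthy /healthz. A minimal triage sketch, using only the commands kubeadm itself suggests in the output above; the cgroup-mode check and the docker exec invocations are assumptions for illustration and were not executed by this run (the profile container is deleted below):

	# cgroup2fs => cgroup v2, tmpfs => cgroup v1 (this host is v1 per the SystemVerification warning)
	docker exec force-systemd-flag-779725 stat -fc %T /sys/fs/cgroup/
	# Inspect the crash-looping kubelet, as kubeadm suggests above
	docker exec force-systemd-flag-779725 systemctl status kubelet
	docker exec force-systemd-flag-779725 journalctl -xeu kubelet -n 50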
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-779725 -n force-systemd-flag-779725
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-779725 -n force-systemd-flag-779725: exit status 6 (337.026668ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:04:15.596964  508047 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-779725" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-779725" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-779725" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-779725
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-779725: (2.073945538s)
--- FAIL: TestForceSystemdFlag (504.74s)
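Both the kubeadm output and the kubelet journal point the same way: the kubelet's cgroup v1 validation is what blocks startup, and minikube's own stderr suggests the systemd cgroup driver as a workaround. A hedged re-run sketch built only from flags and settings quoted in this report (profile name, driver and runtime reused for illustration; whether these flags alone are sufficient for kubelet v1.35 on a cgroup v1 host is not something this run verifies):

	out/minikube-linux-arm64 start -p force-systemd-flag-779725 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=5
	# If the cgroup v1 validation still fails, the SystemVerification warning above says the
	# kubelet configuration option 'FailCgroupV1' must also be set to 'false' on v1 hosts.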

                                                
                                    

TestForceSystemdEnv (507.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-029895 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-029895 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m24.077330707s)

                                                
                                                
-- stdout --
	* [force-systemd-env-029895] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-029895" primary control-plane node in "force-systemd-env-029895" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
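For TestForceSystemdEnv the systemd request comes from the environment rather than a flag: the startup summary above shows MINIKUBE_FORCE_SYSTEMD=true in effect. A minimal repro sketch of the same invocation, assuming the jenkins workspace layout shown in the log (KUBECONFIG and MINIKUBE_HOME would also need to match for an exact repro):

	export MINIKUBE_FORCE_SYSTEMD=true
	out/minikube-linux-arm64 start -p force-systemd-env-029895 --memory=3072 \
	  --alsologtostderr -v=5 --driver=docker --container-runtime=crio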
** stderr ** 
	I1227 09:50:27.848490  467949 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:50:27.848608  467949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:50:27.848620  467949 out.go:374] Setting ErrFile to fd 2...
	I1227 09:50:27.848626  467949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:50:27.848899  467949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:50:27.849333  467949 out.go:368] Setting JSON to false
	I1227 09:50:27.850215  467949 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9177,"bootTime":1766819851,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:50:27.850291  467949 start.go:143] virtualization:  
	I1227 09:50:27.854012  467949 out.go:179] * [force-systemd-env-029895] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:50:27.858432  467949 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:50:27.858568  467949 notify.go:221] Checking for updates...
	I1227 09:50:27.865009  467949 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:50:27.868328  467949 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:50:27.871482  467949 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:50:27.874603  467949 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:50:27.877639  467949 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 09:50:27.881188  467949 config.go:182] Loaded profile config "running-upgrade-193962": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 09:50:27.881314  467949 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:50:27.913520  467949 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:50:27.913644  467949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:50:27.969646  467949 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:50:27.960508871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:50:27.969751  467949 docker.go:319] overlay module found
	I1227 09:50:27.972952  467949 out.go:179] * Using the docker driver based on user configuration
	I1227 09:50:27.975852  467949 start.go:309] selected driver: docker
	I1227 09:50:27.975875  467949 start.go:928] validating driver "docker" against <nil>
	I1227 09:50:27.975892  467949 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:50:27.976633  467949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:50:28.041167  467949 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:50:28.031373484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:50:28.041316  467949 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:50:28.041540  467949 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:50:28.044483  467949 out.go:179] * Using Docker driver with root privileges
	I1227 09:50:28.047444  467949 cni.go:84] Creating CNI manager for ""
	I1227 09:50:28.047526  467949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:50:28.047543  467949 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:50:28.047642  467949 start.go:353] cluster config:
	{Name:force-systemd-env-029895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-029895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:50:28.050837  467949 out.go:179] * Starting "force-systemd-env-029895" primary control-plane node in "force-systemd-env-029895" cluster
	I1227 09:50:28.053736  467949 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:50:28.056706  467949 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:50:28.059690  467949 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:50:28.059748  467949 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:50:28.059760  467949 cache.go:65] Caching tarball of preloaded images
	I1227 09:50:28.059787  467949 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:50:28.059878  467949 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:50:28.059889  467949 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:50:28.060049  467949 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/config.json ...
	I1227 09:50:28.060091  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/config.json: {Name:mkbd0deacee41664ec3346c809b2a72a35a3420d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:28.080565  467949 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:50:28.080590  467949 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:50:28.080613  467949 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:50:28.080651  467949 start.go:360] acquireMachinesLock for force-systemd-env-029895: {Name:mkff765438666b70f627e7893a10d52815117ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:50:28.080776  467949 start.go:364] duration metric: took 102.696µs to acquireMachinesLock for "force-systemd-env-029895"
	I1227 09:50:28.080808  467949 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-029895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-029895 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:50:28.080887  467949 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:50:28.084308  467949 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:50:28.084586  467949 start.go:159] libmachine.API.Create for "force-systemd-env-029895" (driver="docker")
	I1227 09:50:28.084632  467949 client.go:173] LocalClient.Create starting
	I1227 09:50:28.084732  467949 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 09:50:28.084772  467949 main.go:144] libmachine: Decoding PEM data...
	I1227 09:50:28.084796  467949 main.go:144] libmachine: Parsing certificate...
	I1227 09:50:28.084853  467949 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 09:50:28.084875  467949 main.go:144] libmachine: Decoding PEM data...
	I1227 09:50:28.084897  467949 main.go:144] libmachine: Parsing certificate...
	I1227 09:50:28.085288  467949 cli_runner.go:164] Run: docker network inspect force-systemd-env-029895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:50:28.107326  467949 cli_runner.go:211] docker network inspect force-systemd-env-029895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:50:28.107412  467949 network_create.go:284] running [docker network inspect force-systemd-env-029895] to gather additional debugging logs...
	I1227 09:50:28.107434  467949 cli_runner.go:164] Run: docker network inspect force-systemd-env-029895
	W1227 09:50:28.124519  467949 cli_runner.go:211] docker network inspect force-systemd-env-029895 returned with exit code 1
	I1227 09:50:28.124554  467949 network_create.go:287] error running [docker network inspect force-systemd-env-029895]: docker network inspect force-systemd-env-029895: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-029895 not found
	I1227 09:50:28.124584  467949 network_create.go:289] output of [docker network inspect force-systemd-env-029895]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-029895 not found
	
	** /stderr **
	I1227 09:50:28.124710  467949 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:50:28.141450  467949 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 09:50:28.141861  467949 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 09:50:28.142104  467949 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 09:50:28.142468  467949 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-efd540d3be42 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:a0:1a:3d:6f:70} reservation:<nil>}
	I1227 09:50:28.142943  467949 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a89af0}
	I1227 09:50:28.142965  467949 network_create.go:124] attempt to create docker network force-systemd-env-029895 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 09:50:28.143023  467949 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-029895 force-systemd-env-029895
	I1227 09:50:28.203877  467949 network_create.go:108] docker network force-systemd-env-029895 192.168.85.0/24 created
	I1227 09:50:28.203930  467949 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-029895" container
	I1227 09:50:28.204009  467949 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:50:28.221566  467949 cli_runner.go:164] Run: docker volume create force-systemd-env-029895 --label name.minikube.sigs.k8s.io=force-systemd-env-029895 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:50:28.239667  467949 oci.go:103] Successfully created a docker volume force-systemd-env-029895
	I1227 09:50:28.239771  467949 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-029895-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-029895 --entrypoint /usr/bin/test -v force-systemd-env-029895:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:50:28.745631  467949 oci.go:107] Successfully prepared a docker volume force-systemd-env-029895
	I1227 09:50:28.745696  467949 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:50:28.745707  467949 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:50:28.745800  467949 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-029895:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:50:33.421302  467949 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-029895:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.675459661s)
	I1227 09:50:33.421336  467949 kic.go:203] duration metric: took 4.675625668s to extract preloaded images to volume ...
	W1227 09:50:33.421481  467949 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:50:33.421590  467949 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:50:33.477215  467949 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-029895 --name force-systemd-env-029895 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-029895 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-029895 --network force-systemd-env-029895 --ip 192.168.85.2 --volume force-systemd-env-029895:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:50:33.792066  467949 cli_runner.go:164] Run: docker container inspect force-systemd-env-029895 --format={{.State.Running}}
	I1227 09:50:33.817587  467949 cli_runner.go:164] Run: docker container inspect force-systemd-env-029895 --format={{.State.Status}}
	I1227 09:50:33.847572  467949 cli_runner.go:164] Run: docker exec force-systemd-env-029895 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:50:33.921575  467949 oci.go:144] the created container "force-systemd-env-029895" has a running status.
	I1227 09:50:33.921602  467949 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa...
	I1227 09:50:34.079766  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:50:34.079877  467949 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:50:34.106495  467949 cli_runner.go:164] Run: docker container inspect force-systemd-env-029895 --format={{.State.Status}}
	I1227 09:50:34.130801  467949 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:50:34.130820  467949 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-029895 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:50:34.198312  467949 cli_runner.go:164] Run: docker container inspect force-systemd-env-029895 --format={{.State.Status}}
	I1227 09:50:34.240729  467949 machine.go:94] provisionDockerMachine start ...
	I1227 09:50:34.240826  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:34.269082  467949 main.go:144] libmachine: Using SSH client type: native
	I1227 09:50:34.269418  467949 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1227 09:50:34.269428  467949 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:50:34.270021  467949 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47906->127.0.0.1:33386: read: connection reset by peer
	I1227 09:50:37.422674  467949 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-029895
	
	I1227 09:50:37.422696  467949 ubuntu.go:182] provisioning hostname "force-systemd-env-029895"
	I1227 09:50:37.422762  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:37.445739  467949 main.go:144] libmachine: Using SSH client type: native
	I1227 09:50:37.446051  467949 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1227 09:50:37.446062  467949 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-029895 && echo "force-systemd-env-029895" | sudo tee /etc/hostname
	I1227 09:50:37.611998  467949 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-029895
	
	I1227 09:50:37.612076  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:37.634332  467949 main.go:144] libmachine: Using SSH client type: native
	I1227 09:50:37.634652  467949 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1227 09:50:37.634675  467949 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-029895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-029895/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-029895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:50:37.786720  467949 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:50:37.786798  467949 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 09:50:37.786832  467949 ubuntu.go:190] setting up certificates
	I1227 09:50:37.786871  467949 provision.go:84] configureAuth start
	I1227 09:50:37.786966  467949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-029895
	I1227 09:50:37.805548  467949 provision.go:143] copyHostCerts
	I1227 09:50:37.805586  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:50:37.805617  467949 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 09:50:37.805624  467949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:50:37.806014  467949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 09:50:37.806127  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:50:37.806204  467949 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 09:50:37.806218  467949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:50:37.806266  467949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 09:50:37.806333  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:50:37.806357  467949 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 09:50:37.806366  467949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:50:37.806394  467949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 09:50:37.806453  467949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-029895 san=[127.0.0.1 192.168.85.2 force-systemd-env-029895 localhost minikube]
	I1227 09:50:38.232878  467949 provision.go:177] copyRemoteCerts
	I1227 09:50:38.232949  467949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:50:38.232999  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:38.251312  467949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa Username:docker}
	I1227 09:50:38.355133  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:50:38.355238  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:50:38.390285  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:50:38.390398  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1227 09:50:38.420250  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:50:38.420341  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:50:38.451766  467949 provision.go:87] duration metric: took 664.845582ms to configureAuth
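
For illustration only, a minimal Go sketch of issuing a server certificate carrying the SANs listed in the "generating server cert" line above (127.0.0.1, 192.168.85.2, force-systemd-env-029895, localhost, minikube). This is not minikube's provision code: it is self-signed for brevity, whereas the real flow signs with ca.pem/ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key; the SANs mirror the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-029895"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"force-systemd-env-029895", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed here; the real server.pem is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
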
	I1227 09:50:38.451835  467949 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:50:38.452046  467949 config.go:182] Loaded profile config "force-systemd-env-029895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:50:38.452193  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:38.475008  467949 main.go:144] libmachine: Using SSH client type: native
	I1227 09:50:38.475333  467949 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1227 09:50:38.475348  467949 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:50:38.903695  467949 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:50:38.903717  467949 machine.go:97] duration metric: took 4.662969238s to provisionDockerMachine
	I1227 09:50:38.903729  467949 client.go:176] duration metric: took 10.819085084s to LocalClient.Create
	I1227 09:50:38.903741  467949 start.go:167] duration metric: took 10.81915775s to libmachine.API.Create "force-systemd-env-029895"
	I1227 09:50:38.903748  467949 start.go:293] postStartSetup for "force-systemd-env-029895" (driver="docker")
	I1227 09:50:38.903757  467949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:50:38.903822  467949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:50:38.903910  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:38.924849  467949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa Username:docker}
	I1227 09:50:39.039396  467949 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:50:39.049981  467949 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:50:39.050007  467949 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:50:39.050018  467949 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:50:39.050078  467949 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:50:39.050184  467949 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 09:50:39.050193  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> /etc/ssl/certs/3030432.pem
	I1227 09:50:39.050307  467949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:50:39.061345  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:50:39.088974  467949 start.go:296] duration metric: took 185.212408ms for postStartSetup
	I1227 09:50:39.089436  467949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-029895
	I1227 09:50:39.120280  467949 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/config.json ...
	I1227 09:50:39.120645  467949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:50:39.120713  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:39.159839  467949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa Username:docker}
	I1227 09:50:39.286690  467949 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:50:39.294814  467949 start.go:128] duration metric: took 11.213910971s to createHost
	I1227 09:50:39.294870  467949 start.go:83] releasing machines lock for "force-systemd-env-029895", held for 11.214063244s
	I1227 09:50:39.295002  467949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-029895
	I1227 09:50:39.324100  467949 ssh_runner.go:195] Run: cat /version.json
	I1227 09:50:39.324171  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:39.324470  467949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:50:39.324540  467949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-029895
	I1227 09:50:39.365079  467949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa Username:docker}
	I1227 09:50:39.365793  467949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-env-029895/id_rsa Username:docker}
	I1227 09:50:39.619815  467949 ssh_runner.go:195] Run: systemctl --version
	I1227 09:50:39.630770  467949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:50:39.701721  467949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:50:39.706704  467949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:50:39.706822  467949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:50:39.770587  467949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:50:39.770664  467949 start.go:496] detecting cgroup driver to use...
	I1227 09:50:39.770698  467949 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:50:39.770790  467949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:50:39.805936  467949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:50:39.832018  467949 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:50:39.832172  467949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:50:39.857540  467949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:50:39.887011  467949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:50:40.130707  467949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:50:40.366577  467949 docker.go:234] disabling docker service ...
	I1227 09:50:40.366734  467949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:50:40.405653  467949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:50:40.428497  467949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:50:40.648802  467949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:50:40.871906  467949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:50:40.898589  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:50:40.928676  467949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:50:40.928824  467949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:40.943695  467949 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:50:40.943842  467949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:40.954368  467949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:40.972879  467949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:40.996783  467949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:50:41.011743  467949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:41.028876  467949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:41.052982  467949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:50:41.067496  467949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:50:41.082682  467949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:50:41.091006  467949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:50:41.298112  467949 ssh_runner.go:195] Run: sudo systemctl restart crio
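
For illustration only, a minimal Go sketch of the same kind of in-place rewrite the sed commands above apply to /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O (forcing the systemd cgroup manager, as this force-systemd-env test requires). The sample config contents are hypothetical.

package main

import (
	"fmt"
	"regexp"
)

// setCgroupManager rewrites any existing cgroup_manager line, mirroring
// `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'` above.
func setCgroupManager(conf, driver string) string {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", driver))
}

func main() {
	conf := "[crio.runtime]\ncgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"\n"
	fmt.Print(setCgroupManager(conf, "systemd"))
}
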
	I1227 09:50:41.573502  467949 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:50:41.573617  467949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:50:41.578688  467949 start.go:574] Will wait 60s for crictl version
	I1227 09:50:41.578809  467949 ssh_runner.go:195] Run: which crictl
	I1227 09:50:41.582855  467949 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:50:41.634697  467949 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:50:41.634851  467949 ssh_runner.go:195] Run: crio --version
	I1227 09:50:41.693957  467949 ssh_runner.go:195] Run: crio --version
	I1227 09:50:41.750208  467949 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:50:41.753156  467949 cli_runner.go:164] Run: docker network inspect force-systemd-env-029895 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:50:41.780846  467949 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:50:41.792436  467949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
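
For illustration only, a hypothetical Go equivalent (not minikube code) of the bash one-liner above: drop any existing host.minikube.internal mapping from /etc/hosts and append the new one. It prints the rewritten file rather than copying it back, which would require root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line already mapping name and appends
// "ip<TAB>name", mirroring the grep -v / echo / cp pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHostsEntry(string(data), "192.168.85.1", "host.minikube.internal"))
}
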
	I1227 09:50:41.817487  467949 kubeadm.go:884] updating cluster {Name:force-systemd-env-029895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-029895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:50:41.817614  467949 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:50:41.817671  467949 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:50:41.876771  467949 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:50:41.876792  467949 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:50:41.876850  467949 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:50:41.925823  467949 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:50:41.925844  467949 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:50:41.925851  467949 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 09:50:41.925942  467949 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-029895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-029895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:50:41.926021  467949 ssh_runner.go:195] Run: crio config
	I1227 09:50:42.018878  467949 cni.go:84] Creating CNI manager for ""
	I1227 09:50:42.018950  467949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:50:42.018987  467949 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:50:42.019049  467949 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-029895 NodeName:force-systemd-env-029895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:50:42.019220  467949 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-029895"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:50:42.019327  467949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:50:42.035286  467949 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:50:42.035409  467949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:50:42.048383  467949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1227 09:50:42.065243  467949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:50:42.094120  467949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1227 09:50:42.120908  467949 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:50:42.127375  467949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:50:42.146721  467949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:50:42.346491  467949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:50:42.367389  467949 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895 for IP: 192.168.85.2
	I1227 09:50:42.367408  467949 certs.go:195] generating shared ca certs ...
	I1227 09:50:42.367424  467949 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.367562  467949 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:50:42.367616  467949 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:50:42.367628  467949 certs.go:257] generating profile certs ...
	I1227 09:50:42.367681  467949 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/client.key
	I1227 09:50:42.367706  467949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/client.crt with IP's: []
	I1227 09:50:42.521032  467949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/client.crt ...
	I1227 09:50:42.521068  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/client.crt: {Name:mkf3380ea134bbc98e056bc577fb21cf92acc5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.521268  467949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/client.key ...
	I1227 09:50:42.521285  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/client.key: {Name:mkd3112be8f4b56c36ce38dd92933ee9e23cd02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.521381  467949 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key.b841662a
	I1227 09:50:42.521403  467949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt.b841662a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:50:42.836290  467949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt.b841662a ...
	I1227 09:50:42.836364  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt.b841662a: {Name:mkf348bd85de7ad1845b51f3f8328c29096bc4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.836596  467949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key.b841662a ...
	I1227 09:50:42.836634  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key.b841662a: {Name:mk2b6343e9c0f6b9c13080fdae9ea6469625d7a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.836791  467949 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt.b841662a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt
	I1227 09:50:42.836912  467949 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key.b841662a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key
	I1227 09:50:42.837016  467949 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.key
	I1227 09:50:42.837053  467949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.crt with IP's: []
	I1227 09:50:42.968481  467949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.crt ...
	I1227 09:50:42.968518  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.crt: {Name:mk04da7d86f49811d4f4f7ab5ec4ed2708d0c2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.968712  467949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.key ...
	I1227 09:50:42.968728  467949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.key: {Name:mk58655e514ab34f6fced4e8d45531aab0b7796e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:50:42.968803  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:50:42.968828  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:50:42.968846  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:50:42.968864  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:50:42.968882  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:50:42.968895  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:50:42.968910  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:50:42.968921  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:50:42.968970  467949 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 09:50:42.969016  467949 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 09:50:42.969029  467949 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:50:42.969056  467949 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:50:42.969088  467949 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:50:42.969117  467949 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:50:42.969172  467949 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:50:42.969207  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> /usr/share/ca-certificates/3030432.pem
	I1227 09:50:42.969225  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:50:42.969238  467949 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem -> /usr/share/ca-certificates/303043.pem
	I1227 09:50:42.969767  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:50:42.998609  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:50:43.027493  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:50:43.057682  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:50:43.090143  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:50:43.120515  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:50:43.172400  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:50:43.194101  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-env-029895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:50:43.227587  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 09:50:43.271410  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:50:43.304002  467949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 09:50:43.334754  467949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:50:43.359399  467949 ssh_runner.go:195] Run: openssl version
	I1227 09:50:43.370795  467949 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 09:50:43.382719  467949 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 09:50:43.395340  467949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 09:50:43.399408  467949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 09:50:43.399480  467949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 09:50:43.453488  467949 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:50:43.470849  467949 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:50:43.484245  467949 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:50:43.494703  467949 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:50:43.504366  467949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:50:43.508578  467949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:50:43.508647  467949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:50:43.552705  467949 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:50:43.560290  467949 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:50:43.567672  467949 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 09:50:43.579973  467949 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 09:50:43.591600  467949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 09:50:43.595908  467949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 09:50:43.595975  467949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 09:50:43.642062  467949 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:50:43.649625  467949 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 09:50:43.657188  467949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:50:43.661917  467949 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:50:43.661976  467949 kubeadm.go:401] StartCluster: {Name:force-systemd-env-029895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-029895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:50:43.662044  467949 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:50:43.662116  467949 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:50:43.707549  467949 cri.go:96] found id: ""
	I1227 09:50:43.707658  467949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:50:43.724120  467949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:50:43.732957  467949 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:50:43.733035  467949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:50:43.749699  467949 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:50:43.749722  467949 kubeadm.go:158] found existing configuration files:
	
	I1227 09:50:43.749778  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:50:43.763637  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:50:43.763705  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:50:43.775926  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:50:43.790419  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:50:43.790487  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:50:43.803864  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:50:43.816579  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:50:43.816654  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:50:43.832010  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:50:43.845064  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:50:43.845130  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:50:43.857658  467949 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:50:43.929081  467949 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:50:43.934703  467949 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:50:44.097197  467949 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:50:44.097275  467949 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:50:44.097316  467949 kubeadm.go:319] OS: Linux
	I1227 09:50:44.097366  467949 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:50:44.097427  467949 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:50:44.097477  467949 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:50:44.097529  467949 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:50:44.097588  467949 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:50:44.097641  467949 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:50:44.097692  467949 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:50:44.097745  467949 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:50:44.097795  467949 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:50:44.210688  467949 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:50:44.210807  467949 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:50:44.210903  467949 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:50:44.226657  467949 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:50:44.230723  467949 out.go:252]   - Generating certificates and keys ...
	I1227 09:50:44.230826  467949 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:50:44.230922  467949 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:50:44.520736  467949 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:50:44.871933  467949 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:50:45.040150  467949 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:50:45.482485  467949 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:50:45.800068  467949 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:50:45.800725  467949 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-029895 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:50:46.133222  467949 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:50:46.133601  467949 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-029895 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:50:46.707581  467949 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:50:46.862845  467949 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:50:46.947035  467949 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:50:46.947345  467949 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:50:47.830488  467949 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:50:48.306527  467949 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:50:48.479400  467949 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:50:48.733352  467949 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:50:49.224716  467949 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:50:49.224825  467949 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:50:49.239141  467949 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:50:49.242475  467949 out.go:252]   - Booting up control plane ...
	I1227 09:50:49.242579  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:50:49.242659  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:50:49.243215  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:50:49.262725  467949 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:50:49.263577  467949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:50:49.273232  467949 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:50:49.278665  467949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:50:49.279104  467949 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:50:49.490064  467949 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:50:49.490278  467949 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:54:49.490320  467949 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000437611s
	I1227 09:54:49.490393  467949 kubeadm.go:319] 
	I1227 09:54:49.490489  467949 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:54:49.490529  467949 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:54:49.490694  467949 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:54:49.490717  467949 kubeadm.go:319] 
	I1227 09:54:49.490858  467949 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:54:49.490919  467949 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:54:49.490968  467949 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:54:49.490977  467949 kubeadm.go:319] 
	I1227 09:54:49.496899  467949 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:54:49.497369  467949 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:54:49.497501  467949 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:54:49.497764  467949 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 09:54:49.497777  467949 kubeadm.go:319] 
	I1227 09:54:49.497842  467949 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 09:54:49.497997  467949 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-029895 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-029895 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000437611s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-029895 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-029895 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000437611s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 09:54:49.498105  467949 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 09:54:49.919529  467949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:54:49.933324  467949 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:54:49.933402  467949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:54:49.941828  467949 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:54:49.941849  467949 kubeadm.go:158] found existing configuration files:
	
	I1227 09:54:49.941901  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:54:49.950009  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:54:49.950079  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:54:49.958368  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:54:49.966767  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:54:49.966837  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:54:49.974999  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:54:49.985052  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:54:49.985119  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:54:49.993523  467949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:54:50.015275  467949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:54:50.015369  467949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:54:50.028856  467949 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:54:50.150876  467949 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:54:50.151286  467949 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:54:50.227560  467949 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:58:51.403412  467949 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 09:58:51.403440  467949 kubeadm.go:319] 
	I1227 09:58:51.403514  467949 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 09:58:51.409072  467949 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:58:51.409131  467949 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:58:51.409226  467949 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:58:51.409285  467949 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:58:51.409322  467949 kubeadm.go:319] OS: Linux
	I1227 09:58:51.409371  467949 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:58:51.409423  467949 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:58:51.409486  467949 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:58:51.409539  467949 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:58:51.409591  467949 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:58:51.409643  467949 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:58:51.409691  467949 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:58:51.409742  467949 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:58:51.409791  467949 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:58:51.409868  467949 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:58:51.409970  467949 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:58:51.410066  467949 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:58:51.410138  467949 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:58:51.413848  467949 out.go:252]   - Generating certificates and keys ...
	I1227 09:58:51.413937  467949 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:58:51.414014  467949 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:58:51.414089  467949 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 09:58:51.414194  467949 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 09:58:51.414271  467949 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 09:58:51.414323  467949 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 09:58:51.414382  467949 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 09:58:51.414440  467949 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 09:58:51.414509  467949 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 09:58:51.414577  467949 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 09:58:51.414612  467949 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 09:58:51.414664  467949 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:58:51.414712  467949 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:58:51.414765  467949 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:58:51.414815  467949 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:58:51.414874  467949 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:58:51.414931  467949 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:58:51.415010  467949 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:58:51.415072  467949 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:58:51.420021  467949 out.go:252]   - Booting up control plane ...
	I1227 09:58:51.420150  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:58:51.420243  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:58:51.420319  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:58:51.420436  467949 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:58:51.420536  467949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:58:51.420642  467949 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:58:51.420727  467949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:58:51.420767  467949 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:58:51.420899  467949 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:58:51.421005  467949 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:58:51.421070  467949 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001610122s
	I1227 09:58:51.421074  467949 kubeadm.go:319] 
	I1227 09:58:51.421131  467949 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:58:51.421164  467949 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:58:51.421278  467949 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:58:51.421283  467949 kubeadm.go:319] 
	I1227 09:58:51.421393  467949 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:58:51.421426  467949 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:58:51.421457  467949 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:58:51.421519  467949 kubeadm.go:403] duration metric: took 8m7.759551334s to StartCluster
	I1227 09:58:51.421551  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:58:51.421611  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:58:51.421710  467949 kubeadm.go:319] 
	I1227 09:58:51.472151  467949 cri.go:96] found id: ""
	I1227 09:58:51.472181  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.472190  467949 logs.go:284] No container was found matching "kube-apiserver"
	I1227 09:58:51.472196  467949 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 09:58:51.472271  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:58:51.507048  467949 cri.go:96] found id: ""
	I1227 09:58:51.507075  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.507084  467949 logs.go:284] No container was found matching "etcd"
	I1227 09:58:51.507090  467949 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 09:58:51.507151  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:58:51.532175  467949 cri.go:96] found id: ""
	I1227 09:58:51.532203  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.532212  467949 logs.go:284] No container was found matching "coredns"
	I1227 09:58:51.532219  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:58:51.532279  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:58:51.560804  467949 cri.go:96] found id: ""
	I1227 09:58:51.560837  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.560846  467949 logs.go:284] No container was found matching "kube-scheduler"
	I1227 09:58:51.560853  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:58:51.560910  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:58:51.588389  467949 cri.go:96] found id: ""
	I1227 09:58:51.588415  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.588424  467949 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:58:51.588431  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:58:51.588490  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:58:51.617006  467949 cri.go:96] found id: ""
	I1227 09:58:51.617033  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.617042  467949 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 09:58:51.617048  467949 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 09:58:51.617106  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:58:51.642364  467949 cri.go:96] found id: ""
	I1227 09:58:51.642387  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.642395  467949 logs.go:284] No container was found matching "kindnet"
	I1227 09:58:51.642405  467949 logs.go:123] Gathering logs for dmesg ...
	I1227 09:58:51.642417  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 09:58:51.658409  467949 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:58:51.658440  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:58:51.723748  467949 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:58:51.715496    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.715973    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.717642    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.718073    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.719759    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 09:58:51.715496    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.715973    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.717642    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.718073    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.719759    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:58:51.723772  467949 logs.go:123] Gathering logs for CRI-O ...
	I1227 09:58:51.723785  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 09:58:51.757945  467949 logs.go:123] Gathering logs for container status ...
	I1227 09:58:51.757981  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:58:51.789642  467949 logs.go:123] Gathering logs for kubelet ...
	I1227 09:58:51.789679  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1227 09:58:51.857608  467949 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 09:58:51.857679  467949 out.go:285] * 
	* 
	W1227 09:58:51.857785  467949 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:58:51.857802  467949 out.go:285] * 
	* 
	W1227 09:58:51.858054  467949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:58:51.862978  467949 out.go:203] 
	W1227 09:58:51.866601  467949 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:58:51.866700  467949 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 09:58:51.866724  467949 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 09:58:51.870333  467949 out.go:203] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-029895 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 09:58:51.920763917 +0000 UTC m=+2778.467163983
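The start failure above exits with K8S_KUBELET_NOT_RUNNING, and minikube's own suggestion (repeated in the stderr above) is to inspect the kubelet and retry with the systemd cgroup driver. A minimal troubleshooting sketch, assuming the force-systemd-env-029895 node is still running; the flags are taken straight from the suggestion in the log and are not verified against this run:

	# inspect the kubelet inside the minikube node (the commands kubeadm suggests above)
	minikube -p force-systemd-env-029895 ssh -- sudo systemctl status kubelet
	minikube -p force-systemd-env-029895 ssh -- sudo journalctl -xeu kubelet
	# retry the start with the cgroup driver minikube suggests for this environment
	minikube start -p force-systemd-env-029895 --memory=3072 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd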
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-029895
helpers_test.go:244: (dbg) docker inspect force-systemd-env-029895:

-- stdout --
	[
	    {
	        "Id": "5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751",
	        "Created": "2025-12-27T09:50:33.492160278Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:50:33.568282115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751/hosts",
	        "LogPath": "/var/lib/docker/containers/5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751/5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751-json.log",
	        "Name": "/force-systemd-env-029895",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-029895:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-029895",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b8e3875448ea4225d671b859a8517942448a355dfce91433aed3a15b7c14751",
	                "LowerDir": "/var/lib/docker/overlay2/408ddf594cb9f8ca0ab4155b8e5e206e20e385865b99fbb75e29cc692fa3f05b-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/408ddf594cb9f8ca0ab4155b8e5e206e20e385865b99fbb75e29cc692fa3f05b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/408ddf594cb9f8ca0ab4155b8e5e206e20e385865b99fbb75e29cc692fa3f05b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/408ddf594cb9f8ca0ab4155b8e5e206e20e385865b99fbb75e29cc692fa3f05b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-029895",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-029895/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-029895",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-029895",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-029895",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "709b92f18d01b54f956f3efb79aa4a29a6957926c8eee64117d1875a4164a6cd",
	            "SandboxKey": "/var/run/docker/netns/709b92f18d01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-029895": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:ad:e1:3c:47:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e19e087901f5fb383d372bb204a1fbf3507896553e97636e00233fce8d4ffeb1",
	                    "EndpointID": "e6b08d6e3904d07cd20cba764fa8bdd4ad7d0ae085de04580086f2efd044a43d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-029895",
	                        "5b8e3875448e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
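The NetworkSettings block above lists the host ports published for the node container; the same Go-template query the harness uses later in this report can pull a single mapping out of it. A sketch, assuming the container is still up (33386 is simply what the inspect output above reports for 22/tcp):

	# print the host port mapped to the node's SSH port (22/tcp); for this run it should echo 33386
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-029895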
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-029895 -n force-systemd-env-029895
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-029895 -n force-systemd-env-029895: exit status 6 (341.503902ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 09:58:52.280606  489044 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-029895" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
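The status output above warns that kubectl is pointing at a stale context, and the stderr confirms the profile endpoint is missing from the kubeconfig; minikube's own fix is `minikube update-context`. A sketch of that suggestion, hedged because the control plane never came up in this run, so the refreshed context may still be unreachable:

	# refresh the kubeconfig entry for this profile, as the status output suggests
	minikube -p force-systemd-env-029895 update-context
	# confirm whether the context now resolves
	kubectl config get-contexts force-systemd-env-029895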
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-029895 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-246753 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat docker --no-pager                                                                       │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/docker/daemon.json                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo docker system info                                                                                    │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cri-dockerd --version                                                                                 │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat containerd --no-pager                                                                   │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/containerd/config.toml                                                                       │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo containerd config dump                                                                                │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat crio --no-pager                                                                         │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo crio config                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ delete  │ -p cilium-246753                                                                                                            │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ delete  │ -p cert-expiration-028595                                                                                                   │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:55:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:55:52.985585  484533 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:55:52.985780  484533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:55:52.985806  484533 out.go:374] Setting ErrFile to fd 2...
	I1227 09:55:52.985825  484533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:55:52.986303  484533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:55:52.986835  484533 out.go:368] Setting JSON to false
	I1227 09:55:52.987745  484533 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9502,"bootTime":1766819851,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:55:52.987863  484533 start.go:143] virtualization:  
	I1227 09:55:52.991395  484533 out.go:179] * [force-systemd-flag-779725] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:55:52.993846  484533 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:55:52.993919  484533 notify.go:221] Checking for updates...
	I1227 09:55:53.000344  484533 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:55:53.003707  484533 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:55:53.007511  484533 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:55:53.010720  484533 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:55:53.013752  484533 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:55:53.017387  484533 config.go:182] Loaded profile config "force-systemd-env-029895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:55:53.017511  484533 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:55:53.047437  484533 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:55:53.047564  484533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:55:53.110402  484533 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:55:53.10014062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:55:53.110514  484533 docker.go:319] overlay module found
	I1227 09:55:53.114785  484533 out.go:179] * Using the docker driver based on user configuration
	I1227 09:55:53.117515  484533 start.go:309] selected driver: docker
	I1227 09:55:53.117531  484533 start.go:928] validating driver "docker" against <nil>
	I1227 09:55:53.117550  484533 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:55:53.118403  484533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:55:53.181601  484533 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:55:53.167902272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:55:53.181751  484533 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:55:53.181970  484533 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:55:53.184753  484533 out.go:179] * Using Docker driver with root privileges
	I1227 09:55:53.187465  484533 cni.go:84] Creating CNI manager for ""
	I1227 09:55:53.187535  484533 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:55:53.187549  484533 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:55:53.187643  484533 start.go:353] cluster config:
	{Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:55:53.190621  484533 out.go:179] * Starting "force-systemd-flag-779725" primary control-plane node in "force-systemd-flag-779725" cluster
	I1227 09:55:53.193330  484533 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:55:53.196297  484533 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:55:53.199155  484533 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:55:53.199207  484533 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:55:53.199220  484533 cache.go:65] Caching tarball of preloaded images
	I1227 09:55:53.199229  484533 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:55:53.199305  484533 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:55:53.199316  484533 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:55:53.199434  484533 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/config.json ...
	I1227 09:55:53.199453  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/config.json: {Name:mk96df6b6bceeb873dcb64d2217c60d1a3551e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:55:53.218541  484533 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:55:53.218566  484533 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:55:53.218587  484533 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:55:53.218617  484533 start.go:360] acquireMachinesLock for force-systemd-flag-779725: {Name:mkfa95052f8385e546a22dbee7799fa0cde0dd51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:55:53.218736  484533 start.go:364] duration metric: took 98.331µs to acquireMachinesLock for "force-systemd-flag-779725"
	I1227 09:55:53.218768  484533 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:55:53.218843  484533 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:55:53.222123  484533 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:55:53.222408  484533 start.go:159] libmachine.API.Create for "force-systemd-flag-779725" (driver="docker")
	I1227 09:55:53.222449  484533 client.go:173] LocalClient.Create starting
	I1227 09:55:53.222519  484533 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 09:55:53.222563  484533 main.go:144] libmachine: Decoding PEM data...
	I1227 09:55:53.222582  484533 main.go:144] libmachine: Parsing certificate...
	I1227 09:55:53.222634  484533 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 09:55:53.222660  484533 main.go:144] libmachine: Decoding PEM data...
	I1227 09:55:53.222672  484533 main.go:144] libmachine: Parsing certificate...
	I1227 09:55:53.223033  484533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-779725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:55:53.239101  484533 cli_runner.go:211] docker network inspect force-systemd-flag-779725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:55:53.239200  484533 network_create.go:284] running [docker network inspect force-systemd-flag-779725] to gather additional debugging logs...
	I1227 09:55:53.239227  484533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-779725
	W1227 09:55:53.255088  484533 cli_runner.go:211] docker network inspect force-systemd-flag-779725 returned with exit code 1
	I1227 09:55:53.255128  484533 network_create.go:287] error running [docker network inspect force-systemd-flag-779725]: docker network inspect force-systemd-flag-779725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-779725 not found
	I1227 09:55:53.255148  484533 network_create.go:289] output of [docker network inspect force-systemd-flag-779725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-779725 not found
	
	** /stderr **
	I1227 09:55:53.255269  484533 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:55:53.272556  484533 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 09:55:53.272985  484533 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 09:55:53.273283  484533 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 09:55:53.273750  484533 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a210c0}
	I1227 09:55:53.273775  484533 network_create.go:124] attempt to create docker network force-systemd-flag-779725 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:55:53.273840  484533 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-779725 force-systemd-flag-779725
	I1227 09:55:53.332153  484533 network_create.go:108] docker network force-systemd-flag-779725 192.168.76.0/24 created
	I1227 09:55:53.332185  484533 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-779725" container
	I1227 09:55:53.332269  484533 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:55:53.349497  484533 cli_runner.go:164] Run: docker volume create force-systemd-flag-779725 --label name.minikube.sigs.k8s.io=force-systemd-flag-779725 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:55:53.367363  484533 oci.go:103] Successfully created a docker volume force-systemd-flag-779725
	I1227 09:55:53.367455  484533 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-779725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-779725 --entrypoint /usr/bin/test -v force-systemd-flag-779725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:55:53.914927  484533 oci.go:107] Successfully prepared a docker volume force-systemd-flag-779725
	I1227 09:55:53.914996  484533 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:55:53.915017  484533 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:55:53.915088  484533 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-779725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:55:57.830866  484533 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-779725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.915713139s)
	I1227 09:55:57.830903  484533 kic.go:203] duration metric: took 3.915882421s to extract preloaded images to volume ...
	W1227 09:55:57.831035  484533 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:55:57.831151  484533 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:55:57.883318  484533 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-779725 --name force-systemd-flag-779725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-779725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-779725 --network force-systemd-flag-779725 --ip 192.168.76.2 --volume force-systemd-flag-779725:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:55:58.204888  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Running}}
	I1227 09:55:58.226405  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Status}}
	I1227 09:55:58.245312  484533 cli_runner.go:164] Run: docker exec force-systemd-flag-779725 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:55:58.292687  484533 oci.go:144] the created container "force-systemd-flag-779725" has a running status.
	I1227 09:55:58.292719  484533 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa...
	I1227 09:55:58.595860  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:55:58.595909  484533 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:55:58.623920  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Status}}
	I1227 09:55:58.641703  484533 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:55:58.641729  484533 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-779725 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:55:58.697285  484533 cli_runner.go:164] Run: docker container inspect force-systemd-flag-779725 --format={{.State.Status}}
	I1227 09:55:58.720797  484533 machine.go:94] provisionDockerMachine start ...
	I1227 09:55:58.721011  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:55:58.748299  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:55:58.748638  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:55:58.748647  484533 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:55:58.749426  484533 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33558->127.0.0.1:33411: read: connection reset by peer
	I1227 09:56:01.890073  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-779725
	
	I1227 09:56:01.890094  484533 ubuntu.go:182] provisioning hostname "force-systemd-flag-779725"
	I1227 09:56:01.890204  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:01.921774  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:56:01.922103  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:56:01.922114  484533 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-779725 && echo "force-systemd-flag-779725" | sudo tee /etc/hostname
	I1227 09:56:02.080023  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-779725
	
	I1227 09:56:02.080102  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:02.099159  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:56:02.099474  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:56:02.099496  484533 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-779725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-779725/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-779725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:56:02.238676  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:56:02.238704  484533 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 09:56:02.238737  484533 ubuntu.go:190] setting up certificates
	I1227 09:56:02.238747  484533 provision.go:84] configureAuth start
	I1227 09:56:02.238816  484533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-779725
	I1227 09:56:02.256388  484533 provision.go:143] copyHostCerts
	I1227 09:56:02.256433  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:56:02.256466  484533 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 09:56:02.256477  484533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:56:02.256557  484533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 09:56:02.256643  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:56:02.256665  484533 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 09:56:02.256670  484533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:56:02.256701  484533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 09:56:02.256743  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:56:02.256763  484533 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 09:56:02.256770  484533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:56:02.256793  484533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 09:56:02.256843  484533 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-779725 san=[127.0.0.1 192.168.76.2 force-systemd-flag-779725 localhost minikube]
	I1227 09:56:02.820175  484533 provision.go:177] copyRemoteCerts
	I1227 09:56:02.820242  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:56:02.820293  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:02.839095  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:02.937862  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:56:02.937917  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:56:02.954900  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:56:02.955012  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:56:02.972777  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:56:02.972837  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
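	The server certificate copied above was generated at 09:56:02 with the SANs listed in the provision step (127.0.0.1, 192.168.76.2, force-systemd-flag-779725, localhost, minikube). As an illustrative check only, not part of this run, one could confirm those SANs on the copied certificate, using the /etc/docker path the log scp'd to:
	# hypothetical verification; path taken from the scp lines above
	openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'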
	I1227 09:56:02.989993  484533 provision.go:87] duration metric: took 751.225708ms to configureAuth
	I1227 09:56:02.990022  484533 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:56:02.990286  484533 config.go:182] Loaded profile config "force-systemd-flag-779725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:56:02.990400  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.009371  484533 main.go:144] libmachine: Using SSH client type: native
	I1227 09:56:03.009721  484533 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1227 09:56:03.009743  484533 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:56:03.297764  484533 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
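	The SSH command above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS and restarts CRI-O; the empty error plus the echoed file contents indicate it succeeded. A minimal sketch of how one might double-check that step by hand (assumed commands, not from this run):
	cat /etc/sysconfig/crio.minikube   # should contain the --insecure-registry option shown above
	systemctl is-active crio           # should print "active" after the restart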
	I1227 09:56:03.297784  484533 machine.go:97] duration metric: took 4.576962743s to provisionDockerMachine
	I1227 09:56:03.297795  484533 client.go:176] duration metric: took 10.075336911s to LocalClient.Create
	I1227 09:56:03.297811  484533 start.go:167] duration metric: took 10.075406179s to libmachine.API.Create "force-systemd-flag-779725"
	I1227 09:56:03.297818  484533 start.go:293] postStartSetup for "force-systemd-flag-779725" (driver="docker")
	I1227 09:56:03.297829  484533 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:56:03.297891  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:56:03.297940  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.317175  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.420137  484533 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:56:03.425386  484533 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:56:03.425413  484533 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:56:03.425425  484533 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:56:03.425486  484533 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:56:03.425575  484533 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 09:56:03.425587  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> /etc/ssl/certs/3030432.pem
	I1227 09:56:03.425700  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:56:03.434974  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:56:03.456435  484533 start.go:296] duration metric: took 158.601608ms for postStartSetup
	I1227 09:56:03.456816  484533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-779725
	I1227 09:56:03.474010  484533 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/config.json ...
	I1227 09:56:03.474343  484533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:56:03.474394  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.491294  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.587080  484533 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:56:03.591521  484533 start.go:128] duration metric: took 10.372663152s to createHost
	I1227 09:56:03.591548  484533 start.go:83] releasing machines lock for "force-systemd-flag-779725", held for 10.372799235s
	I1227 09:56:03.591617  484533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-779725
	I1227 09:56:03.608861  484533 ssh_runner.go:195] Run: cat /version.json
	I1227 09:56:03.608919  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.609175  484533 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:56:03.609235  484533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-779725
	I1227 09:56:03.630341  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.632306  484533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/force-systemd-flag-779725/id_rsa Username:docker}
	I1227 09:56:03.824484  484533 ssh_runner.go:195] Run: systemctl --version
	I1227 09:56:03.831153  484533 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:56:03.866389  484533 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:56:03.870816  484533 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:56:03.870913  484533 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:56:03.899637  484533 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
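	As the find/mv invocation above shows, minikube sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so that only the CNI it selects later (kindnet here) gets loaded. A rough shell equivalent of that rename, written out for readability and purely illustrative:
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  case "$f" in *.mk_disabled) ;; *) [ -f "$f" ] && sudo mv "$f" "$f.mk_disabled" ;; esac
	done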
	I1227 09:56:03.899664  484533 start.go:496] detecting cgroup driver to use...
	I1227 09:56:03.899679  484533 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:56:03.899734  484533 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:56:03.917759  484533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:56:03.930590  484533 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:56:03.930658  484533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:56:03.949239  484533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:56:03.967893  484533 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:56:04.087613  484533 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:56:04.236434  484533 docker.go:234] disabling docker service ...
	I1227 09:56:04.236533  484533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:56:04.258720  484533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:56:04.272670  484533 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:56:04.397721  484533 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:56:04.522806  484533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:56:04.536017  484533 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:56:04.550767  484533 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:56:04.550852  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.560105  484533 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 09:56:04.560201  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.570142  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.579772  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.588919  484533 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:56:04.597463  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.607037  484533 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.621447  484533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:56:04.631131  484533 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:56:04.638826  484533 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:56:04.646373  484533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:56:04.761276  484533 ssh_runner.go:195] Run: sudo systemctl restart crio
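	The sed edits at 09:56:04 rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch cgroup_manager to systemd, set conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start before CRI-O is restarted. A hedged way to confirm the values those edits leave behind (the resulting file contents are not captured in this log):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf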
	I1227 09:56:04.955089  484533 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:56:04.955160  484533 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:56:04.959251  484533 start.go:574] Will wait 60s for crictl version
	I1227 09:56:04.959361  484533 ssh_runner.go:195] Run: which crictl
	I1227 09:56:04.963057  484533 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:56:04.986231  484533 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:56:04.986384  484533 ssh_runner.go:195] Run: crio --version
	I1227 09:56:05.016778  484533 ssh_runner.go:195] Run: crio --version
	I1227 09:56:05.050525  484533 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:56:05.053292  484533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-779725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:56:05.068946  484533 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:56:05.072677  484533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:56:05.082493  484533 kubeadm.go:884] updating cluster {Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:56:05.082626  484533 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:56:05.082693  484533 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:56:05.121874  484533 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:56:05.121901  484533 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:56:05.121957  484533 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:56:05.148169  484533 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:56:05.148235  484533 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:56:05.148258  484533 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:56:05.148376  484533 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-779725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
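	The kubelet unit text above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the base unit to /lib/systemd/system/kubelet.service (see the scp lines at 09:56:05 below). An illustrative check that systemd picked up the override after the later daemon-reload (assumed commands, not from this run):
	systemctl cat kubelet                # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart  # effective ExecStart should match the drop-in above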
	I1227 09:56:05.148475  484533 ssh_runner.go:195] Run: crio config
	I1227 09:56:05.206823  484533 cni.go:84] Creating CNI manager for ""
	I1227 09:56:05.206849  484533 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:56:05.206863  484533 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:56:05.206887  484533 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-779725 NodeName:force-systemd-flag-779725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:56:05.207016  484533 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-779725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
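	This generated kubeadm config is the crux of the force-systemd-flag test: the KubeletConfiguration sets cgroupDriver: systemd to match the cgroup_manager forced into CRI-O earlier. A minimal sketch of how one could verify both sides agree on the node once kubeadm has written the kubelet config (illustrative, not part of the run):
	sudo crio config 2>/dev/null | grep cgroup_manager   # expect "systemd"
	grep cgroupDriver /var/lib/kubelet/config.yaml        # expect "systemd"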
	I1227 09:56:05.207095  484533 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:56:05.215092  484533 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:56:05.215172  484533 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:56:05.222984  484533 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1227 09:56:05.235863  484533 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:56:05.249729  484533 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1227 09:56:05.262651  484533 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:56:05.266142  484533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:56:05.276049  484533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:56:05.391751  484533 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:56:05.409597  484533 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725 for IP: 192.168.76.2
	I1227 09:56:05.409620  484533 certs.go:195] generating shared ca certs ...
	I1227 09:56:05.409637  484533 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:05.409782  484533 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:56:05.409833  484533 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:56:05.409843  484533 certs.go:257] generating profile certs ...
	I1227 09:56:05.409896  484533 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.key
	I1227 09:56:05.409921  484533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.crt with IP's: []
	I1227 09:56:05.819192  484533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.crt ...
	I1227 09:56:05.819226  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.crt: {Name:mkd1d275c3c275bb893e96ad8a5f4872b9397052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:05.819425  484533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.key ...
	I1227 09:56:05.819439  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/client.key: {Name:mkdfedbef7b759979f4447f7a607c257e91a7898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:05.819535  484533 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec
	I1227 09:56:05.819551  484533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:56:06.116438  484533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec ...
	I1227 09:56:06.116469  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec: {Name:mk761e2004e9f22386d59f827364db9f82f7df23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.116661  484533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec ...
	I1227 09:56:06.116675  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec: {Name:mkc6c7023e856a9d390622f91a838a5e786a71be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.116764  484533 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt.b02328ec -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt
	I1227 09:56:06.116843  484533 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key.b02328ec -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key
	I1227 09:56:06.116933  484533 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key
	I1227 09:56:06.116952  484533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt with IP's: []
	I1227 09:56:06.250408  484533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt ...
	I1227 09:56:06.250442  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt: {Name:mkd8f7c3da466bc4f00268e09334061825876390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.250650  484533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key ...
	I1227 09:56:06.250665  484533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key: {Name:mk3b09064dff809062bef247e863b0fbfa5fc48d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:56:06.250760  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:56:06.250783  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:56:06.250798  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:56:06.250815  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:56:06.250836  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:56:06.250850  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:56:06.250867  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:56:06.250878  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:56:06.250939  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 09:56:06.250983  484533 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 09:56:06.250996  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:56:06.251022  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:56:06.251051  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:56:06.251078  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:56:06.251127  484533 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:56:06.251161  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.251179  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.251190  484533 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem -> /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.251756  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:56:06.272067  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:56:06.290346  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:56:06.307837  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:56:06.325835  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:56:06.343691  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:56:06.361569  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:56:06.378946  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/force-systemd-flag-779725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:56:06.398560  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 09:56:06.417740  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:56:06.435383  484533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 09:56:06.452867  484533 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:56:06.465304  484533 ssh_runner.go:195] Run: openssl version
	I1227 09:56:06.471936  484533 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.479535  484533 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:56:06.487152  484533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.490950  484533 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.491022  484533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:56:06.532152  484533 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:56:06.539692  484533 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:56:06.547003  484533 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.554481  484533 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 09:56:06.561894  484533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.565876  484533 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.565948  484533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 09:56:06.607064  484533 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:56:06.614989  484533 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 09:56:06.622266  484533 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.629626  484533 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 09:56:06.637235  484533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.640795  484533 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.640900  484533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 09:56:06.681904  484533 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:56:06.689401  484533 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
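	The block above installs each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is exactly what the openssl x509 -hash calls compute. The same pattern condensed into an illustrative pair of commands:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"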
	I1227 09:56:06.696833  484533 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:56:06.700609  484533 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:56:06.700663  484533 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-779725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-779725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:56:06.700744  484533 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:56:06.700807  484533 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:56:06.727176  484533 cri.go:96] found id: ""
	I1227 09:56:06.727247  484533 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:56:06.735188  484533 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:56:06.743319  484533 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:56:06.743410  484533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:56:06.751614  484533 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:56:06.751680  484533 kubeadm.go:158] found existing configuration files:
	
	I1227 09:56:06.751746  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:56:06.759507  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:56:06.759571  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:56:06.766845  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:56:06.774563  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:56:06.774631  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:56:06.781805  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:56:06.789306  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:56:06.789389  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:56:06.797071  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:56:06.804915  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:56:06.804991  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:56:06.812295  484533 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:56:06.849359  484533 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:56:06.849523  484533 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:56:06.941272  484533 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:56:06.941349  484533 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:56:06.941390  484533 kubeadm.go:319] OS: Linux
	I1227 09:56:06.941440  484533 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:56:06.941491  484533 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:56:06.941541  484533 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:56:06.941600  484533 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:56:06.941651  484533 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:56:06.941702  484533 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:56:06.941753  484533 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:56:06.941804  484533 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:56:06.941854  484533 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:56:07.013986  484533 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:56:07.014230  484533 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:56:07.014385  484533 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:56:07.026541  484533 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:56:07.033211  484533 out.go:252]   - Generating certificates and keys ...
	I1227 09:56:07.033314  484533 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:56:07.033397  484533 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:56:07.147983  484533 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:56:07.196342  484533 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:56:07.292471  484533 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:56:07.406603  484533 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:56:07.797541  484533 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:56:07.797702  484533 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:56:08.092997  484533 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:56:08.093189  484533 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:56:08.332578  484533 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:56:08.826008  484533 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:56:09.186234  484533 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:56:09.186774  484533 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:56:09.323204  484533 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:56:09.676751  484533 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:56:09.832347  484533 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:56:10.112003  484533 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:56:10.327460  484533 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:56:10.328358  484533 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:56:10.331150  484533 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:56:10.336747  484533 out.go:252]   - Booting up control plane ...
	I1227 09:56:10.336912  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:56:10.337014  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:56:10.337120  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:56:10.351272  484533 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:56:10.351587  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:56:10.360769  484533 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:56:10.360949  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:56:10.361018  484533 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:56:10.495827  484533 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:56:10.495946  484533 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:58:51.403412  467949 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 09:58:51.403440  467949 kubeadm.go:319] 
	I1227 09:58:51.403514  467949 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 09:58:51.409072  467949 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:58:51.409131  467949 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:58:51.409226  467949 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:58:51.409285  467949 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:58:51.409322  467949 kubeadm.go:319] OS: Linux
	I1227 09:58:51.409371  467949 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:58:51.409423  467949 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:58:51.409486  467949 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:58:51.409539  467949 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:58:51.409591  467949 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:58:51.409643  467949 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:58:51.409691  467949 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:58:51.409742  467949 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:58:51.409791  467949 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:58:51.409868  467949 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:58:51.409970  467949 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:58:51.410066  467949 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:58:51.410138  467949 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:58:51.413848  467949 out.go:252]   - Generating certificates and keys ...
	I1227 09:58:51.413937  467949 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:58:51.414014  467949 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:58:51.414089  467949 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 09:58:51.414194  467949 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 09:58:51.414271  467949 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 09:58:51.414323  467949 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 09:58:51.414382  467949 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 09:58:51.414440  467949 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 09:58:51.414509  467949 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 09:58:51.414577  467949 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 09:58:51.414612  467949 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 09:58:51.414664  467949 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:58:51.414712  467949 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:58:51.414765  467949 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:58:51.414815  467949 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:58:51.414874  467949 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:58:51.414931  467949 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:58:51.415010  467949 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:58:51.415072  467949 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:58:51.420021  467949 out.go:252]   - Booting up control plane ...
	I1227 09:58:51.420150  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:58:51.420243  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:58:51.420319  467949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:58:51.420436  467949 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:58:51.420536  467949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:58:51.420642  467949 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:58:51.420727  467949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:58:51.420767  467949 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:58:51.420899  467949 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:58:51.421005  467949 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:58:51.421070  467949 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001610122s
	I1227 09:58:51.421074  467949 kubeadm.go:319] 
	I1227 09:58:51.421131  467949 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:58:51.421164  467949 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:58:51.421278  467949 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:58:51.421283  467949 kubeadm.go:319] 
	I1227 09:58:51.421393  467949 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:58:51.421426  467949 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:58:51.421457  467949 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:58:51.421519  467949 kubeadm.go:403] duration metric: took 8m7.759551334s to StartCluster
	I1227 09:58:51.421551  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:58:51.421611  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:58:51.421710  467949 kubeadm.go:319] 
	I1227 09:58:51.472151  467949 cri.go:96] found id: ""
	I1227 09:58:51.472181  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.472190  467949 logs.go:284] No container was found matching "kube-apiserver"
	I1227 09:58:51.472196  467949 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 09:58:51.472271  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:58:51.507048  467949 cri.go:96] found id: ""
	I1227 09:58:51.507075  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.507084  467949 logs.go:284] No container was found matching "etcd"
	I1227 09:58:51.507090  467949 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 09:58:51.507151  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:58:51.532175  467949 cri.go:96] found id: ""
	I1227 09:58:51.532203  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.532212  467949 logs.go:284] No container was found matching "coredns"
	I1227 09:58:51.532219  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:58:51.532279  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:58:51.560804  467949 cri.go:96] found id: ""
	I1227 09:58:51.560837  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.560846  467949 logs.go:284] No container was found matching "kube-scheduler"
	I1227 09:58:51.560853  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:58:51.560910  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:58:51.588389  467949 cri.go:96] found id: ""
	I1227 09:58:51.588415  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.588424  467949 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:58:51.588431  467949 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:58:51.588490  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:58:51.617006  467949 cri.go:96] found id: ""
	I1227 09:58:51.617033  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.617042  467949 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 09:58:51.617048  467949 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 09:58:51.617106  467949 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:58:51.642364  467949 cri.go:96] found id: ""
	I1227 09:58:51.642387  467949 logs.go:282] 0 containers: []
	W1227 09:58:51.642395  467949 logs.go:284] No container was found matching "kindnet"
	I1227 09:58:51.642405  467949 logs.go:123] Gathering logs for dmesg ...
	I1227 09:58:51.642417  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 09:58:51.658409  467949 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:58:51.658440  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:58:51.723748  467949 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:58:51.715496    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.715973    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.717642    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.718073    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.719759    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 09:58:51.715496    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.715973    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.717642    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.718073    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:51.719759    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:58:51.723772  467949 logs.go:123] Gathering logs for CRI-O ...
	I1227 09:58:51.723785  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 09:58:51.757945  467949 logs.go:123] Gathering logs for container status ...
	I1227 09:58:51.757981  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:58:51.789642  467949 logs.go:123] Gathering logs for kubelet ...
	I1227 09:58:51.789679  467949 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1227 09:58:51.857608  467949 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 09:58:51.857679  467949 out.go:285] * 
	W1227 09:58:51.857785  467949 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:58:51.857802  467949 out.go:285] * 
	W1227 09:58:51.858054  467949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:58:51.862978  467949 out.go:203] 
	W1227 09:58:51.866601  467949 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001610122s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:58:51.866700  467949 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 09:58:51.866724  467949 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 09:58:51.870333  467949 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567170206Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567362783Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567463453Z" level=info msg="Create NRI interface"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567640596Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567709068Z" level=info msg="runtime interface created"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567769713Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567826296Z" level=info msg="runtime interface starting up..."
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567882781Z" level=info msg="starting plugins..."
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.567945822Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 09:50:41 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:41.568070295Z" level=info msg="No systemd watchdog enabled"
	Dec 27 09:50:41 force-systemd-env-029895 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.217034567Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=f15669b0-7059-4e5e-a704-be10a8650bed name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.217820177Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=76409248-4bf4-472c-9e19-7fe8ab685a28 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.218532358Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=da5b3644-689c-4927-ab89-d1feef67d3d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.219003987Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=e5465c1e-1e6d-4e95-ac37-a83177908f16 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.222486361Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=abd44aeb-3571-4bf5-aa8a-343f0ab0e860 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.223088953Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=023facb0-51a7-4b2f-b8e9-b71f200cc62c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:50:44 force-systemd-env-029895 crio[838]: time="2025-12-27T09:50:44.223569739Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=8b9be771-e79f-45a8-92a4-22e5625c9516 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.23125601Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=ca7f01d1-65b1-42e8-b4b2-73208af13193 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.232022657Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=843a4648-4803-458d-80ac-4eaa15768eff name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.232767545Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=d47c55f3-e39f-4e94-a8a6-24b945ae8b16 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.233396082Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=874fce3a-584c-4db3-ae69-534d590c3b6f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.23404578Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8b5460f0-4677-406b-bb6f-80e08aa5ffcb name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.234680086Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fc9c16a5-b30d-4786-aa24-3db260f72923 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:54:50 force-systemd-env-029895 crio[838]: time="2025-12-27T09:54:50.235306095Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=7b638ed4-f713-4850-9b24-1590cb83ba8e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:58:52.891198    5075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:52.892059    5075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:52.906538    5075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:52.907372    5075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:58:52.912852    5075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +42.046056] overlayfs: idmapped layers are currently not supported
	[Dec27 09:26] overlayfs: idmapped layers are currently not supported
	[  +3.426470] overlayfs: idmapped layers are currently not supported
	[Dec27 09:27] overlayfs: idmapped layers are currently not supported
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:58:52 up  2:41,  0 user,  load average: 0.25, 1.14, 1.93
	Linux force-systemd-env-029895 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 09:58:50 force-systemd-env-029895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:58:51 force-systemd-env-029895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 09:58:51 force-systemd-env-029895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:58:51 force-systemd-env-029895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:58:51 force-systemd-env-029895 kubelet[4892]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 09:58:51 force-systemd-env-029895 kubelet[4892]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 09:58:51 force-systemd-env-029895 kubelet[4892]: E1227 09:58:51.498548    4892 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:58:51 force-systemd-env-029895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:58:51 force-systemd-env-029895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:58:52 force-systemd-env-029895 kubelet[4985]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 09:58:52 force-systemd-env-029895 kubelet[4985]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 09:58:52 force-systemd-env-029895 kubelet[4985]: E1227 09:58:52.228569    4985 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:58:52 force-systemd-env-029895 kubelet[5079]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 09:58:52 force-systemd-env-029895 kubelet[5079]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 09:58:52 force-systemd-env-029895 kubelet[5079]: E1227 09:58:52.976889    5079 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:58:52 force-systemd-env-029895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-029895 -n force-systemd-env-029895
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-029895 -n force-systemd-env-029895: exit status 6 (314.698489ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:58:53.355885  489270 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-029895" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-029895" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-029895" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-029895
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-029895: (2.002966181s)
--- FAIL: TestForceSystemdEnv (507.57s)

                                                
                                    
TestJSONOutput/pause/Command (1.87s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-923177 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-923177 --output=json --user=testUser: exit status 80 (1.873809439s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e182516f-f6bd-4fdd-9ca0-ebece1ee0768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-923177 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9ab68df1-4ee4-4661-91eb-c2e52ce3af9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T09:29:41Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"40ba154d-9b84-42bb-8b34-55bddfb50858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-923177 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.87s)

                                                
                                    
TestJSONOutput/unpause/Command (1.81s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-923177 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-923177 --output=json --user=testUser: exit status 80 (1.805290097s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"445880ea-f85f-40c8-8948-4ea7241450d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-923177 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"41c237de-f3e9-425e-9442-3d75a66f9be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T09:29:43Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"f41bf91f-5f46-4c7f-be46-891130dde038","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-923177 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.81s)

                                                
                                    
TestPause/serial/Pause (6.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-212930 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-212930 --alsologtostderr -v=5: exit status 80 (2.476318949s)

                                                
                                                
-- stdout --
	* Pausing node pause-212930 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:42:28.427780  431958 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:42:28.428795  431958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:42:28.428842  431958 out.go:374] Setting ErrFile to fd 2...
	I1227 09:42:28.428863  431958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:42:28.429532  431958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:42:28.429953  431958 out.go:368] Setting JSON to false
	I1227 09:42:28.430015  431958 mustload.go:66] Loading cluster: pause-212930
	I1227 09:42:28.430552  431958 config.go:182] Loaded profile config "pause-212930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:42:28.431079  431958 cli_runner.go:164] Run: docker container inspect pause-212930 --format={{.State.Status}}
	I1227 09:42:28.450902  431958 host.go:66] Checking if "pause-212930" exists ...
	I1227 09:42:28.451222  431958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:42:28.522129  431958 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:42:28.503649515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:42:28.522816  431958 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-212930 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 09:42:28.525918  431958 out.go:179] * Pausing node pause-212930 ... 
	I1227 09:42:28.529743  431958 host.go:66] Checking if "pause-212930" exists ...
	I1227 09:42:28.530096  431958 ssh_runner.go:195] Run: systemctl --version
	I1227 09:42:28.530192  431958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:28.547330  431958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:28.644738  431958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:28.657466  431958 pause.go:52] kubelet running: true
	I1227 09:42:28.657541  431958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:42:28.861709  431958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:42:28.861805  431958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:42:28.925457  431958 cri.go:96] found id: "aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886"
	I1227 09:42:28.925476  431958 cri.go:96] found id: "78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71"
	I1227 09:42:28.925481  431958 cri.go:96] found id: "fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008"
	I1227 09:42:28.925484  431958 cri.go:96] found id: "36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b"
	I1227 09:42:28.925487  431958 cri.go:96] found id: "c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea"
	I1227 09:42:28.925491  431958 cri.go:96] found id: "e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb"
	I1227 09:42:28.925494  431958 cri.go:96] found id: "368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23"
	I1227 09:42:28.925498  431958 cri.go:96] found id: "a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786"
	I1227 09:42:28.925501  431958 cri.go:96] found id: "8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b"
	I1227 09:42:28.925507  431958 cri.go:96] found id: "123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe"
	I1227 09:42:28.925510  431958 cri.go:96] found id: "03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4"
	I1227 09:42:28.925513  431958 cri.go:96] found id: "1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b"
	I1227 09:42:28.925517  431958 cri.go:96] found id: "8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23"
	I1227 09:42:28.925520  431958 cri.go:96] found id: "f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff"
	I1227 09:42:28.925522  431958 cri.go:96] found id: ""
	I1227 09:42:28.925573  431958 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:42:28.936726  431958 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:42:28Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:42:29.122101  431958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:29.136386  431958 pause.go:52] kubelet running: false
	I1227 09:42:29.136483  431958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:42:29.288418  431958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:42:29.288605  431958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:42:29.362771  431958 cri.go:96] found id: "aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886"
	I1227 09:42:29.362796  431958 cri.go:96] found id: "78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71"
	I1227 09:42:29.362801  431958 cri.go:96] found id: "fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008"
	I1227 09:42:29.362805  431958 cri.go:96] found id: "36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b"
	I1227 09:42:29.362808  431958 cri.go:96] found id: "c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea"
	I1227 09:42:29.362812  431958 cri.go:96] found id: "e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb"
	I1227 09:42:29.362816  431958 cri.go:96] found id: "368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23"
	I1227 09:42:29.362836  431958 cri.go:96] found id: "a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786"
	I1227 09:42:29.362845  431958 cri.go:96] found id: "8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b"
	I1227 09:42:29.362857  431958 cri.go:96] found id: "123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe"
	I1227 09:42:29.362864  431958 cri.go:96] found id: "03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4"
	I1227 09:42:29.362867  431958 cri.go:96] found id: "1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b"
	I1227 09:42:29.362870  431958 cri.go:96] found id: "8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23"
	I1227 09:42:29.362874  431958 cri.go:96] found id: "f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff"
	I1227 09:42:29.362877  431958 cri.go:96] found id: ""
	I1227 09:42:29.362934  431958 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:42:29.812378  431958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:29.826451  431958 pause.go:52] kubelet running: false
	I1227 09:42:29.826521  431958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:42:29.978078  431958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:42:29.978189  431958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:42:30.130654  431958 cri.go:96] found id: "aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886"
	I1227 09:42:30.130682  431958 cri.go:96] found id: "78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71"
	I1227 09:42:30.130688  431958 cri.go:96] found id: "fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008"
	I1227 09:42:30.130693  431958 cri.go:96] found id: "36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b"
	I1227 09:42:30.130715  431958 cri.go:96] found id: "c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea"
	I1227 09:42:30.130720  431958 cri.go:96] found id: "e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb"
	I1227 09:42:30.130724  431958 cri.go:96] found id: "368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23"
	I1227 09:42:30.130727  431958 cri.go:96] found id: "a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786"
	I1227 09:42:30.130732  431958 cri.go:96] found id: "8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b"
	I1227 09:42:30.130739  431958 cri.go:96] found id: "123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe"
	I1227 09:42:30.130742  431958 cri.go:96] found id: "03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4"
	I1227 09:42:30.130750  431958 cri.go:96] found id: "1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b"
	I1227 09:42:30.130754  431958 cri.go:96] found id: "8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23"
	I1227 09:42:30.130757  431958 cri.go:96] found id: "f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff"
	I1227 09:42:30.130762  431958 cri.go:96] found id: ""
	I1227 09:42:30.130827  431958 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:42:30.579357  431958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:30.592268  431958 pause.go:52] kubelet running: false
	I1227 09:42:30.592373  431958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 09:42:30.743275  431958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 09:42:30.743422  431958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 09:42:30.812316  431958 cri.go:96] found id: "aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886"
	I1227 09:42:30.812341  431958 cri.go:96] found id: "78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71"
	I1227 09:42:30.812347  431958 cri.go:96] found id: "fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008"
	I1227 09:42:30.812350  431958 cri.go:96] found id: "36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b"
	I1227 09:42:30.812354  431958 cri.go:96] found id: "c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea"
	I1227 09:42:30.812358  431958 cri.go:96] found id: "e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb"
	I1227 09:42:30.812362  431958 cri.go:96] found id: "368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23"
	I1227 09:42:30.812364  431958 cri.go:96] found id: "a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786"
	I1227 09:42:30.812367  431958 cri.go:96] found id: "8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b"
	I1227 09:42:30.812373  431958 cri.go:96] found id: "123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe"
	I1227 09:42:30.812377  431958 cri.go:96] found id: "03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4"
	I1227 09:42:30.812380  431958 cri.go:96] found id: "1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b"
	I1227 09:42:30.812393  431958 cri.go:96] found id: "8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23"
	I1227 09:42:30.812397  431958 cri.go:96] found id: "f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff"
	I1227 09:42:30.812400  431958 cri.go:96] found id: ""
	I1227 09:42:30.812451  431958 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:42:30.827219  431958 out.go:203] 
	W1227 09:42:30.830111  431958 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:42:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:42:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:42:30.830130  431958 out.go:285] * 
	* 
	W1227 09:42:30.833971  431958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:42:30.836087  431958 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-212930 --alsologtostderr -v=5" : exit status 80
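For context on the failure above: the pause path SSHes into the node and runs `sudo runc list -f json`, and that command is what exits with status 1 here because /run/runc does not exist on the node. A minimal way to re-run the same check by hand against this profile is sketched below; only the `runc list -f json` invocation itself appears in the log, the surrounding `minikube ssh` wrapper is an assumed convenience:

	# check whether the runc state directory is present on the node (missing in this run)
	out/minikube-linux-arm64 -p pause-212930 ssh "ls -ld /run/runc"
	# re-run the exact command the pause code executed over SSH
	out/minikube-linux-arm64 -p pause-212930 ssh "sudo runc list -f json"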
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-212930
helpers_test.go:244: (dbg) docker inspect pause-212930:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556",
	        "Created": "2025-12-27T09:41:23.359687129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:41:25.825011528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/hostname",
	        "HostsPath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/hosts",
	        "LogPath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556-json.log",
	        "Name": "/pause-212930",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-212930:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-212930",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556",
	                "LowerDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-212930",
	                "Source": "/var/lib/docker/volumes/pause-212930/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-212930",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-212930",
	                "name.minikube.sigs.k8s.io": "pause-212930",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "907f8eef2d5c7943aeba1710ead13eec8eedd1418e89c753e492a30da0b72f3e",
	            "SandboxKey": "/var/run/docker/netns/907f8eef2d5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33340"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33338"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33339"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-212930": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:45:10:4d:23:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5ef6a920fbe5d46f9d5c8df54c3416c448684dd3ffc32c15c27db042128606cd",
	                    "EndpointID": "c3f4de98c622827e7e61e70e24deb63e5780f3718316c0953d51fda1d6926bc6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-212930",
	                        "67a0c964596f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-212930 -n pause-212930
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-212930 -n pause-212930: exit status 2 (345.500961ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-212930 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-212930 logs -n 25: (1.351439183s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-535956                                                                                         │ multinode-535956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:38 UTC │                     │
	│ start   │ -p multinode-535956-m02 --driver=docker  --container-runtime=crio                                                │ multinode-535956-m02        │ jenkins │ v1.37.0 │ 27 Dec 25 09:38 UTC │                     │
	│ start   │ -p multinode-535956-m03 --driver=docker  --container-runtime=crio                                                │ multinode-535956-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 09:38 UTC │ 27 Dec 25 09:39 UTC │
	│ node    │ add -p multinode-535956                                                                                          │ multinode-535956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ delete  │ -p multinode-535956-m03                                                                                          │ multinode-535956-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ delete  │ -p multinode-535956                                                                                              │ multinode-535956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ start   │ -p scheduled-stop-172677 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ stop    │ -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --cancel-scheduled                                                                      │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │ 27 Dec 25 09:40 UTC │
	│ delete  │ -p scheduled-stop-172677                                                                                         │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │ 27 Dec 25 09:41 UTC │
	│ start   │ -p insufficient-storage-641667 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-641667 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │                     │
	│ delete  │ -p insufficient-storage-641667                                                                                   │ insufficient-storage-641667 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ start   │ -p pause-212930 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-212930                │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:42 UTC │
	│ start   │ -p missing-upgrade-080776 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-080776      │ jenkins │ v1.35.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:42 UTC │
	│ start   │ -p pause-212930 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-212930                │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:42 UTC │
	│ start   │ -p missing-upgrade-080776 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-080776      │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	│ pause   │ -p pause-212930 --alsologtostderr -v=5                                                                           │ pause-212930                │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:42:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:42:17.666518  430965 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:42:17.666705  430965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:42:17.666732  430965 out.go:374] Setting ErrFile to fd 2...
	I1227 09:42:17.666753  430965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:42:17.667034  430965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:42:17.667462  430965 out.go:368] Setting JSON to false
	I1227 09:42:17.668364  430965 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8687,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:42:17.668461  430965 start.go:143] virtualization:  
	I1227 09:42:17.673391  430965 out.go:179] * [missing-upgrade-080776] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:42:17.677266  430965 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:42:17.677427  430965 notify.go:221] Checking for updates...
	I1227 09:42:17.683086  430965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:42:17.685932  430965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:42:17.688686  430965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:42:17.691545  430965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:42:17.694315  430965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:42:17.697641  430965 config.go:182] Loaded profile config "missing-upgrade-080776": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 09:42:17.701188  430965 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 09:42:17.704063  430965 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:42:17.728755  430965 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:42:17.728868  430965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:42:17.786701  430965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:42:17.776674688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:42:17.786805  430965 docker.go:319] overlay module found
	I1227 09:42:17.789919  430965 out.go:179] * Using the docker driver based on existing profile
	I1227 09:42:17.792742  430965 start.go:309] selected driver: docker
	I1227 09:42:17.792760  430965 start.go:928] validating driver "docker" against &{Name:missing-upgrade-080776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-080776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:42:17.792874  430965 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:42:17.793587  430965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:42:17.844941  430965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:42:17.835245271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:42:17.845269  430965 cni.go:84] Creating CNI manager for ""
	I1227 09:42:17.845335  430965 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:42:17.845382  430965 start.go:353] cluster config:
	{Name:missing-upgrade-080776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-080776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:42:17.848458  430965 out.go:179] * Starting "missing-upgrade-080776" primary control-plane node in "missing-upgrade-080776" cluster
	I1227 09:42:17.851246  430965 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:42:17.856025  430965 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:42:17.858825  430965 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 09:42:17.858880  430965 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:42:17.858895  430965 cache.go:65] Caching tarball of preloaded images
	I1227 09:42:17.858899  430965 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 09:42:17.858980  430965 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:42:17.858989  430965 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 09:42:17.859102  430965 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/missing-upgrade-080776/config.json ...
	I1227 09:42:17.877823  430965 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 09:42:17.877847  430965 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 09:42:17.877867  430965 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:42:17.877897  430965 start.go:360] acquireMachinesLock for missing-upgrade-080776: {Name:mk3f368884f25182476d4c92af57261b5fc1f1f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:42:17.877954  430965 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "missing-upgrade-080776"
	I1227 09:42:17.877980  430965 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:42:17.877986  430965 fix.go:54] fixHost starting: 
	I1227 09:42:17.878275  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.893588  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:17.893653  430965 fix.go:112] recreateIfNeeded on missing-upgrade-080776: state= err=unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.893685  430965 fix.go:117] machineExists: false. err=machine does not exist
	I1227 09:42:17.896942  430965 out.go:179] * docker "missing-upgrade-080776" container is missing, will recreate.
	I1227 09:42:19.344170  430416 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:42:19.344195  430416 machine.go:97] duration metric: took 6.557372287s to provisionDockerMachine
	I1227 09:42:19.344206  430416 start.go:293] postStartSetup for "pause-212930" (driver="docker")
	I1227 09:42:19.344217  430416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:42:19.344285  430416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:42:19.344334  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.364653  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.465975  430416 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:42:19.469472  430416 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:42:19.469501  430416 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:42:19.469513  430416 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:42:19.469592  430416 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:42:19.469715  430416 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 09:42:19.469832  430416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:42:19.477562  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:42:19.496225  430416 start.go:296] duration metric: took 151.997998ms for postStartSetup
	I1227 09:42:19.496328  430416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:42:19.496391  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.514124  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.612288  430416 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:42:19.617386  430416 fix.go:56] duration metric: took 6.854220318s for fixHost
	I1227 09:42:19.617414  430416 start.go:83] releasing machines lock for "pause-212930", held for 6.854271593s
	I1227 09:42:19.617482  430416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-212930
	I1227 09:42:19.635420  430416 ssh_runner.go:195] Run: cat /version.json
	I1227 09:42:19.635468  430416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:42:19.635538  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.635472  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.653729  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.655553  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.749667  430416 ssh_runner.go:195] Run: systemctl --version
	I1227 09:42:19.851206  430416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:42:19.909312  430416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:42:19.913885  430416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:42:19.913959  430416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:42:19.923704  430416 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:42:19.923732  430416 start.go:496] detecting cgroup driver to use...
	I1227 09:42:19.923785  430416 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:42:19.923860  430416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:42:19.940168  430416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:42:19.952930  430416 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:42:19.953036  430416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:42:19.971481  430416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:42:19.985445  430416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:42:20.142802  430416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:42:20.288248  430416 docker.go:234] disabling docker service ...
	I1227 09:42:20.288317  430416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:42:20.304414  430416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:42:20.320168  430416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:42:20.482479  430416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:42:20.629348  430416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:42:20.643822  430416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:42:20.659310  430416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:42:20.659397  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.668310  430416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:42:20.668419  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.677313  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.686522  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.695656  430416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:42:20.703907  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.712738  430416 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.721070  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.729858  430416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:42:20.737355  430416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:42:20.745081  430416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:42:20.883536  430416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:42:21.177333  430416 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:42:21.177418  430416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:42:21.181406  430416 start.go:574] Will wait 60s for crictl version
	I1227 09:42:21.181532  430416 ssh_runner.go:195] Run: which crictl
	I1227 09:42:21.185092  430416 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:42:21.209180  430416 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:42:21.209363  430416 ssh_runner.go:195] Run: crio --version
	I1227 09:42:21.237522  430416 ssh_runner.go:195] Run: crio --version
	I1227 09:42:21.278704  430416 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:42:21.282069  430416 cli_runner.go:164] Run: docker network inspect pause-212930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:42:21.307883  430416 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:42:21.312584  430416 kubeadm.go:884] updating cluster {Name:pause-212930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-212930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:42:21.312768  430416 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:42:21.312834  430416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:42:21.403069  430416 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:42:21.403097  430416 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:42:21.403151  430416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:42:21.503833  430416 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:42:21.503855  430416 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:42:21.503863  430416 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:42:21.503969  430416 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-212930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-212930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:42:21.504048  430416 ssh_runner.go:195] Run: crio config
	I1227 09:42:21.667570  430416 cni.go:84] Creating CNI manager for ""
	I1227 09:42:21.667645  430416 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:42:21.667679  430416 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:42:21.667719  430416 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-212930 NodeName:pause-212930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:42:21.667875  430416 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-212930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
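	Note: the three kubeadm documents above (InitConfiguration/ClusterConfiguration plus the KubeletConfiguration and KubeProxyConfiguration) are rendered in memory, written to /var/tmp/minikube/kubeadm.yaml.new on the node a few lines below, and later compared against the config the cluster was started with to decide whether reconfiguration is needed. A minimal sketch of reproducing that check by hand from the host (assuming the pause-212930 profile is still present and `minikube ssh` accepts a quoted command):
	    # dump the freshly generated config (path taken from the scp step below)
	    minikube ssh -p pause-212930 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	    # compare it against the config the running cluster was started with (same diff the test runs later)
	    minikube ssh -p pause-212930 "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"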
	I1227 09:42:21.667970  430416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:42:21.681431  430416 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:42:21.681544  430416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:42:21.697638  430416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1227 09:42:21.720491  430416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:42:21.743608  430416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1227 09:42:21.766517  430416 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:42:21.773479  430416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:42:21.980852  430416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:42:21.997123  430416 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930 for IP: 192.168.76.2
	I1227 09:42:21.997142  430416 certs.go:195] generating shared ca certs ...
	I1227 09:42:21.997163  430416 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:42:21.997305  430416 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:42:21.997347  430416 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:42:21.997354  430416 certs.go:257] generating profile certs ...
	I1227 09:42:21.997437  430416 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.key
	I1227 09:42:21.998368  430416 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/apiserver.key.0d8d2798
	I1227 09:42:21.999077  430416 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/proxy-client.key
	I1227 09:42:21.999219  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 09:42:21.999251  430416 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 09:42:21.999258  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:42:21.999284  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:42:21.999312  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:42:21.999336  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:42:21.999384  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:42:22.000089  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:42:22.028404  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:42:22.059636  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:42:22.087497  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:42:22.108436  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:42:22.133599  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:42:22.163221  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:42:22.192274  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:42:22.220536  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 09:42:22.252444  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 09:42:22.283844  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:42:22.319233  430416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:42:22.352431  430416 ssh_runner.go:195] Run: openssl version
	I1227 09:42:22.363423  430416 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:22.373611  430416 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:42:22.391847  430416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:22.398978  430416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:22.399088  430416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
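	Note: the `ln -fs` / `openssl x509 -hash` sequence above installs minikube's CA into the node's OpenSSL trust store: the certificate is linked into /etc/ssl/certs both by name and, via a subject-hash link (tested a few lines later as /etc/ssl/certs/b5213941.0), in the form OpenSSL uses for lookup. A hedged sketch of the same check run manually inside the node:
	    # print the subject hash OpenSSL expects the /etc/ssl/certs/<hash>.0 symlink to carry
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # the corresponding trust-store link the test verifies afterwards
	    ls -l /etc/ssl/certs/b5213941.0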
	I1227 09:42:17.899777  430965 delete.go:124] DEMOLISHING missing-upgrade-080776 ...
	I1227 09:42:17.899875  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.915673  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	W1227 09:42:17.915734  430965 stop.go:83] unable to get state: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.915755  430965 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.916234  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.931659  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:17.931724  430965 delete.go:82] Unable to get host status for missing-upgrade-080776, assuming it has already been deleted: state: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.931799  430965 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-080776
	W1227 09:42:17.947046  430965 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-080776 returned with exit code 1
	I1227 09:42:17.947090  430965 kic.go:371] could not find the container missing-upgrade-080776 to remove it. will try anyways
	I1227 09:42:17.947151  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.962584  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	W1227 09:42:17.962666  430965 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.962737  430965 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-080776 /bin/bash -c "sudo init 0"
	W1227 09:42:17.976515  430965 cli_runner.go:211] docker exec --privileged -t missing-upgrade-080776 /bin/bash -c "sudo init 0" returned with exit code 1
	I1227 09:42:17.976548  430965 oci.go:659] error shutdown missing-upgrade-080776: docker exec --privileged -t missing-upgrade-080776 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:18.976783  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:18.995280  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:18.995348  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:18.995360  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:18.995403  430965 retry.go:84] will retry after 400ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:19.372198  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:19.387531  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:19.387586  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:19.387596  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:20.026341  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:20.066202  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:20.066280  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:20.066290  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:21.753130  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:21.782377  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:21.782441  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:21.782460  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:22.478949  430416 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:42:22.487765  430416 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.510687  430416 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 09:42:22.519132  430416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.523877  430416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.524025  430416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.582107  430416 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:42:22.590544  430416 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.598517  430416 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 09:42:22.606941  430416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.611297  430416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.611408  430416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.655975  430416 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:42:22.664293  430416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:42:22.669689  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:42:22.716972  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:42:22.760673  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:42:22.801801  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:42:22.845927  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:42:22.891949  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
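	Note: each `openssl x509 ... -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire within that window, a non-zero exit would make minikube regenerate it. For example, the same check against the apiserver cert, run inside the node (a sketch, assuming the node is still up):
	    # exit 0: valid for at least another 24h; exit 1: about to expire
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400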
	I1227 09:42:22.946491  430416 kubeadm.go:401] StartCluster: {Name:pause-212930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-212930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:42:22.946672  430416 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:42:22.946756  430416 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:42:23.005031  430416 cri.go:96] found id: "aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886"
	I1227 09:42:23.005117  430416 cri.go:96] found id: "78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71"
	I1227 09:42:23.005136  430416 cri.go:96] found id: "fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008"
	I1227 09:42:23.005166  430416 cri.go:96] found id: "36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b"
	I1227 09:42:23.005188  430416 cri.go:96] found id: "c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea"
	I1227 09:42:23.005211  430416 cri.go:96] found id: "e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb"
	I1227 09:42:23.005234  430416 cri.go:96] found id: "368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23"
	I1227 09:42:23.005265  430416 cri.go:96] found id: "a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786"
	I1227 09:42:23.005282  430416 cri.go:96] found id: "8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b"
	I1227 09:42:23.005315  430416 cri.go:96] found id: "123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe"
	I1227 09:42:23.005342  430416 cri.go:96] found id: "03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4"
	I1227 09:42:23.005364  430416 cri.go:96] found id: "1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b"
	I1227 09:42:23.005387  430416 cri.go:96] found id: "8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23"
	I1227 09:42:23.005410  430416 cri.go:96] found id: "f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff"
	I1227 09:42:23.005439  430416 cri.go:96] found id: ""
	I1227 09:42:23.005525  430416 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:42:23.033124  430416 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:42:23Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:42:23.033233  430416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:42:23.051487  430416 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:42:23.051559  430416 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:42:23.051632  430416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:42:23.059931  430416 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:42:23.060701  430416 kubeconfig.go:125] found "pause-212930" server: "https://192.168.76.2:8443"
	I1227 09:42:23.061624  430416 kapi.go:59] client config for pause-212930: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.key", CAFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:42:23.062677  430416 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:42:23.062788  430416 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:42:23.062824  430416 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:42:23.062846  430416 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:42:23.062886  430416 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:42:23.062908  430416 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:42:23.063282  430416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:42:23.077762  430416 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 09:42:23.077839  430416 kubeadm.go:602] duration metric: took 26.259931ms to restartPrimaryControlPlane
	I1227 09:42:23.077866  430416 kubeadm.go:403] duration metric: took 131.384317ms to StartCluster
	I1227 09:42:23.077897  430416 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:42:23.077977  430416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:42:23.078992  430416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:42:23.079271  430416 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:42:23.079694  430416 config.go:182] Loaded profile config "pause-212930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:42:23.079718  430416 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:42:23.082597  430416 out.go:179] * Verifying Kubernetes components...
	I1227 09:42:23.082688  430416 out.go:179] * Enabled addons: 
	I1227 09:42:23.085467  430416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:42:23.085578  430416 addons.go:530] duration metric: took 5.864863ms for enable addons: enabled=[]
	I1227 09:42:23.295015  430416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:42:23.309701  430416 node_ready.go:35] waiting up to 6m0s for node "pause-212930" to be "Ready" ...
	I1227 09:42:25.044551  430416 node_ready.go:49] node "pause-212930" is "Ready"
	I1227 09:42:25.044579  430416 node_ready.go:38] duration metric: took 1.734843697s for node "pause-212930" to be "Ready" ...
	I1227 09:42:25.044591  430416 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:42:25.044649  430416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:42:25.062867  430416 api_server.go:72] duration metric: took 1.983534382s to wait for apiserver process to appear ...
	I1227 09:42:25.062892  430416 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:42:25.062921  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:25.080990  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 09:42:25.081067  430416 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 09:42:25.563759  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:25.572268  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:42:25.572304  430416 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:42:26.063634  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:26.072323  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:42:26.072407  430416 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:42:26.563030  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:26.573153  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:42:26.574440  430416 api_server.go:141] control plane version: v1.35.0
	I1227 09:42:26.574475  430416 api_server.go:131] duration metric: took 1.511575457s to wait for apiserver health ...
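	Note: the healthz progression above is the normal restart sequence: the first probe is rejected with 403 because the unauthenticated (system:anonymous) request is not yet allowed to read /healthz, the next probes answer with 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still finishing, and the endpoint flips to 200 once bootstrapping completes. The same verbose listing can be fetched by hand (a sketch; -k skips TLS verification against the minikube CA):
	    curl -k "https://192.168.76.2:8443/healthz?verbose"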
	I1227 09:42:26.574484  430416 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:42:26.578755  430416 system_pods.go:59] 7 kube-system pods found
	I1227 09:42:26.578852  430416 system_pods.go:61] "coredns-7d764666f9-j52xk" [7506b606-698b-481c-aac2-86984f3866e4] Running
	I1227 09:42:26.578876  430416 system_pods.go:61] "etcd-pause-212930" [5fe7f336-926a-4740-bcd1-0ef4efe13456] Running
	I1227 09:42:26.578915  430416 system_pods.go:61] "kindnet-l2mpb" [0633095a-3161-4f93-951b-90597bcc80cb] Running
	I1227 09:42:26.578941  430416 system_pods.go:61] "kube-apiserver-pause-212930" [323d69e8-83e1-441d-91c3-6f40f5b90b85] Running
	I1227 09:42:26.578964  430416 system_pods.go:61] "kube-controller-manager-pause-212930" [c0117059-f3c1-4352-8ae1-4d8a92e83dc3] Running
	I1227 09:42:26.579013  430416 system_pods.go:61] "kube-proxy-w88ml" [b077dc1e-d0af-48bd-b8b0-4f775f0c07b9] Running
	I1227 09:42:26.579039  430416 system_pods.go:61] "kube-scheduler-pause-212930" [c943ea12-6351-4069-974d-211bbafa2b2e] Running
	I1227 09:42:26.579086  430416 system_pods.go:74] duration metric: took 4.571932ms to wait for pod list to return data ...
	I1227 09:42:26.579116  430416 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:42:26.584456  430416 default_sa.go:45] found service account: "default"
	I1227 09:42:26.584486  430416 default_sa.go:55] duration metric: took 5.348943ms for default service account to be created ...
	I1227 09:42:26.584497  430416 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:42:26.587649  430416 system_pods.go:86] 7 kube-system pods found
	I1227 09:42:26.587683  430416 system_pods.go:89] "coredns-7d764666f9-j52xk" [7506b606-698b-481c-aac2-86984f3866e4] Running
	I1227 09:42:26.587690  430416 system_pods.go:89] "etcd-pause-212930" [5fe7f336-926a-4740-bcd1-0ef4efe13456] Running
	I1227 09:42:26.587695  430416 system_pods.go:89] "kindnet-l2mpb" [0633095a-3161-4f93-951b-90597bcc80cb] Running
	I1227 09:42:26.587699  430416 system_pods.go:89] "kube-apiserver-pause-212930" [323d69e8-83e1-441d-91c3-6f40f5b90b85] Running
	I1227 09:42:26.587723  430416 system_pods.go:89] "kube-controller-manager-pause-212930" [c0117059-f3c1-4352-8ae1-4d8a92e83dc3] Running
	I1227 09:42:26.587735  430416 system_pods.go:89] "kube-proxy-w88ml" [b077dc1e-d0af-48bd-b8b0-4f775f0c07b9] Running
	I1227 09:42:26.587740  430416 system_pods.go:89] "kube-scheduler-pause-212930" [c943ea12-6351-4069-974d-211bbafa2b2e] Running
	I1227 09:42:26.587752  430416 system_pods.go:126] duration metric: took 3.24846ms to wait for k8s-apps to be running ...
	I1227 09:42:26.587763  430416 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:42:26.587831  430416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:26.606674  430416 system_svc.go:56] duration metric: took 18.901076ms WaitForService to wait for kubelet
	I1227 09:42:26.606745  430416 kubeadm.go:587] duration metric: took 3.527415468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:42:26.606782  430416 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:42:26.610090  430416 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 09:42:26.610186  430416 node_conditions.go:123] node cpu capacity is 2
	I1227 09:42:26.610216  430416 node_conditions.go:105] duration metric: took 3.411761ms to run NodePressure ...
	I1227 09:42:26.610245  430416 start.go:242] waiting for startup goroutines ...
	I1227 09:42:26.610283  430416 start.go:247] waiting for cluster config update ...
	I1227 09:42:26.610308  430416 start.go:256] writing updated cluster config ...
	I1227 09:42:26.610642  430416 ssh_runner.go:195] Run: rm -f paused
	I1227 09:42:26.614765  430416 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:42:26.615466  430416 kapi.go:59] client config for pause-212930: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.key", CAFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:42:26.618745  430416 pod_ready.go:83] waiting for pod "coredns-7d764666f9-j52xk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.624590  430416 pod_ready.go:94] pod "coredns-7d764666f9-j52xk" is "Ready"
	I1227 09:42:26.624666  430416 pod_ready.go:86] duration metric: took 5.859653ms for pod "coredns-7d764666f9-j52xk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.627650  430416 pod_ready.go:83] waiting for pod "etcd-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.633261  430416 pod_ready.go:94] pod "etcd-pause-212930" is "Ready"
	I1227 09:42:26.633300  430416 pod_ready.go:86] duration metric: took 5.622423ms for pod "etcd-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.636121  430416 pod_ready.go:83] waiting for pod "kube-apiserver-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.640955  430416 pod_ready.go:94] pod "kube-apiserver-pause-212930" is "Ready"
	I1227 09:42:26.641026  430416 pod_ready.go:86] duration metric: took 4.87674ms for pod "kube-apiserver-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.643472  430416 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:27.019099  430416 pod_ready.go:94] pod "kube-controller-manager-pause-212930" is "Ready"
	I1227 09:42:27.019127  430416 pod_ready.go:86] duration metric: took 375.630834ms for pod "kube-controller-manager-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:27.219307  430416 pod_ready.go:83] waiting for pod "kube-proxy-w88ml" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:24.198858  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:24.227858  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:24.227918  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:24.227926  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:25.643606  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:25.659924  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:25.660000  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:25.660011  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:27.619463  430416 pod_ready.go:94] pod "kube-proxy-w88ml" is "Ready"
	I1227 09:42:27.619491  430416 pod_ready.go:86] duration metric: took 400.152707ms for pod "kube-proxy-w88ml" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:27.819727  430416 pod_ready.go:83] waiting for pod "kube-scheduler-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:28.219356  430416 pod_ready.go:94] pod "kube-scheduler-pause-212930" is "Ready"
	I1227 09:42:28.219439  430416 pod_ready.go:86] duration metric: took 399.638535ms for pod "kube-scheduler-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:28.219459  430416 pod_ready.go:40] duration metric: took 1.604623575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:42:28.315358  430416 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 09:42:28.318499  430416 out.go:203] 
	W1227 09:42:28.321581  430416 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 09:42:28.324398  430416 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:42:28.327382  430416 out.go:179] * Done! kubectl is now configured to use "pause-212930" cluster and "default" namespace by default
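	Note: the closing warning is only about client/server version skew: the host kubectl is v1.33.2 while the cluster runs v1.35.0, two minor versions apart. As the log itself suggests, the bundled kubectl avoids the skew; a sketch, assuming the pause-212930 profile is selected:
	    minikube -p pause-212930 kubectl -- get pods -A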
	
	
	==> CRI-O <==
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.394834Z" level=info msg="Starting container: 368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23" id=386d9f1b-c0c4-4f8b-982f-2ed911a11aed name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.395501356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.402585804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.423204162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.423915998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.44571979Z" level=info msg="Creating container: kube-system/coredns-7d764666f9-j52xk/coredns" id=43277de1-85f7-4f41-8e88-db4cbc7533c6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.446005086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.457207388Z" level=info msg="Started container" PID=2174 containerID=368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23 description=kube-system/kube-proxy-w88ml/kube-proxy id=386d9f1b-c0c4-4f8b-982f-2ed911a11aed name=/runtime.v1.RuntimeService/StartContainer sandboxID=1291c634d9fd1d35d70cfb45a4c508823928c302c1c41c12ef5b19ec564c7525
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.463027795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.463576479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.47907418Z" level=info msg="Created container 36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b: kube-system/kube-apiserver-pause-212930/kube-apiserver" id=e9d0ee92-bfe7-4f63-9f6a-966cb65fd235 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.48325107Z" level=info msg="Created container 78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71: kube-system/kindnet-l2mpb/kindnet-cni" id=bc52c1bb-b444-4bc8-bd63-46b741d4c00c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.485555141Z" level=info msg="Starting container: 36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b" id=3657b67d-6f92-4980-8966-658a7bc43798 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.486054856Z" level=info msg="Created container c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea: kube-system/kube-scheduler-pause-212930/kube-scheduler" id=1a754748-9609-43dd-be6c-c73c4adc9daa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.499070432Z" level=info msg="Starting container: 78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71" id=f5e11a5b-56c2-48c1-96d1-accf4a87aa45 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.516851223Z" level=info msg="Started container" PID=2188 containerID=36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b description=kube-system/kube-apiserver-pause-212930/kube-apiserver id=3657b67d-6f92-4980-8966-658a7bc43798 name=/runtime.v1.RuntimeService/StartContainer sandboxID=230deea308df7bac2f3a284f829fd83a5010ae55ec8895270ff1fed1812642c4
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.528760603Z" level=info msg="Created container fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008: kube-system/etcd-pause-212930/etcd" id=faf7ff9e-161b-493c-a9f7-1cd293b2d1b8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.529135196Z" level=info msg="Starting container: c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea" id=af70f7db-c8fe-4cf2-b452-2dff5ff89fa2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.534834617Z" level=info msg="Starting container: fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008" id=98e06a18-9889-4b0f-83d5-64cf04443d9f name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.535950652Z" level=info msg="Started container" PID=2205 containerID=78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71 description=kube-system/kindnet-l2mpb/kindnet-cni id=f5e11a5b-56c2-48c1-96d1-accf4a87aa45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd098f3f156ed010c45afc444cce1539ac78633abcdbbcb60c64f58bdc8177b6
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.537033981Z" level=info msg="Started container" PID=2216 containerID=fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008 description=kube-system/etcd-pause-212930/etcd id=98e06a18-9889-4b0f-83d5-64cf04443d9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=da386ae3581cdd7845e504c6c22377264821d416c52496825402686bfcf7addc
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.540143387Z" level=info msg="Created container aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886: kube-system/coredns-7d764666f9-j52xk/coredns" id=43277de1-85f7-4f41-8e88-db4cbc7533c6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.544301635Z" level=info msg="Starting container: aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886" id=6756bfb7-7feb-4a2f-bc6c-291be06a25ce name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.556162653Z" level=info msg="Started container" PID=2182 containerID=c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea description=kube-system/kube-scheduler-pause-212930/kube-scheduler id=af70f7db-c8fe-4cf2-b452-2dff5ff89fa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=64872c42cfb38f9b1943c4630d7cb0365b1a1f8bfad37e6dd74933a8cc41a02d
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.581849884Z" level=info msg="Started container" PID=2219 containerID=aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886 description=kube-system/coredns-7d764666f9-j52xk/coredns id=6756bfb7-7feb-4a2f-bc6c-291be06a25ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=702192942d5ccfffe8985898cfade8e7fefab5bef7cc36f41c666730a5696cb8
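	Note: the CRI-O excerpt above covers the container re-creation burst at 09:42:21, when kubelet recreates the control-plane, CoreDNS, kindnet and kube-proxy containers during the cluster restart; each new container reuses its pod's original sandbox ID, which is why they appear as ATTEMPT 1 against the same POD ID in the status table below. On a live node the same stream could be followed with journalctl (a sketch, assuming crio runs as a systemd unit in the kicbase image):
	    minikube ssh -p pause-212930 "sudo journalctl -u crio -f"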
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	aa41a510f7223       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     10 seconds ago      Running             coredns                   1                   702192942d5cc       coredns-7d764666f9-j52xk               kube-system
	78e44803c8142       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     10 seconds ago      Running             kindnet-cni               1                   fd098f3f156ed       kindnet-l2mpb                          kube-system
	fb5d5abd088bf       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     10 seconds ago      Running             etcd                      1                   da386ae3581cd       etcd-pause-212930                      kube-system
	36a549f100e5b       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     10 seconds ago      Running             kube-apiserver            1                   230deea308df7       kube-apiserver-pause-212930            kube-system
	c93fe8161deae       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     10 seconds ago      Running             kube-scheduler            1                   64872c42cfb38       kube-scheduler-pause-212930            kube-system
	e43c40729d4c0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     10 seconds ago      Running             kube-controller-manager   1                   24339f8acf8fc       kube-controller-manager-pause-212930   kube-system
	368c7201327cb       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     10 seconds ago      Running             kube-proxy                1                   1291c634d9fd1       kube-proxy-w88ml                       kube-system
	a92514a62346a       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     22 seconds ago      Exited              coredns                   0                   702192942d5cc       coredns-7d764666f9-j52xk               kube-system
	8e8d093798bba       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   33 seconds ago      Exited              kindnet-cni               0                   fd098f3f156ed       kindnet-l2mpb                          kube-system
	123709a40ad13       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     36 seconds ago      Exited              kube-proxy                0                   1291c634d9fd1       kube-proxy-w88ml                       kube-system
	03c077c9fc8b5       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     49 seconds ago      Exited              kube-apiserver            0                   230deea308df7       kube-apiserver-pause-212930            kube-system
	1074f4a0ea38d       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     49 seconds ago      Exited              kube-scheduler            0                   64872c42cfb38       kube-scheduler-pause-212930            kube-system
	8984abd55ae2a       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     49 seconds ago      Exited              etcd                      0                   da386ae3581cd       etcd-pause-212930                      kube-system
	f1c6a1c1c6239       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     49 seconds ago      Exited              kube-controller-manager   0                   24339f8acf8fc       kube-controller-manager-pause-212930   kube-system
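	A listing in this shape is what the CRI-O client prints for all containers, running and exited. A minimal sketch of how such a listing could be reproduced, assuming the pause-212930 profile name and that sudo is available inside the node:
	
	    minikube ssh -p pause-212930 -- sudo crictl ps -a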
	
	
	==> coredns [a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50170 - 44552 "HINFO IN 4153360780497730766.5679363157234129485. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041262471s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49243 - 65142 "HINFO IN 5311795279361908259.7512581213843604179. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013767651s
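	The two coredns blocks above correspond to the current container and the previously exited attempt of the same pod. A sketch of how both could be pulled with kubectl, assuming the pod name coredns-7d764666f9-j52xk from the listing above and a kubeconfig pointed at this cluster:
	
	    kubectl -n kube-system logs coredns-7d764666f9-j52xk
	    kubectl -n kube-system logs coredns-7d764666f9-j52xk --previous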
	
	
	==> describe nodes <==
	Name:               pause-212930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-212930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=pause-212930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:41:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-212930
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:42:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:41:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:41:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:41:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-212930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                8d0e12be-86e6-4a4d-b390-142e4fbcd202
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-j52xk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     36s
	  kube-system                 etcd-pause-212930                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-l2mpb                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-pause-212930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-pause-212930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-w88ml                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-pause-212930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  38s   node-controller  Node pause-212930 event: Registered Node pause-212930 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node pause-212930 event: Registered Node pause-212930 in Controller
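	Node detail in this form comes from describing the node object; a minimal sketch, assuming kubectl is pointed at this cluster and the node is named pause-212930:
	
	    kubectl describe node pause-212930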
	
	
	==> dmesg <==
	[Dec27 09:21] overlayfs: idmapped layers are currently not supported
	[Dec27 09:22] overlayfs: idmapped layers are currently not supported
	[Dec27 09:23] overlayfs: idmapped layers are currently not supported
	[Dec27 09:24] overlayfs: idmapped layers are currently not supported
	[  +3.021431] overlayfs: idmapped layers are currently not supported
	[Dec27 09:25] overlayfs: idmapped layers are currently not supported
	[ +42.046056] overlayfs: idmapped layers are currently not supported
	[Dec27 09:26] overlayfs: idmapped layers are currently not supported
	[  +3.426470] overlayfs: idmapped layers are currently not supported
	[Dec27 09:27] overlayfs: idmapped layers are currently not supported
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
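	The repeated overlayfs messages are kernel ring-buffer output; a sketch of how they could be filtered from inside the node, assuming the pause-212930 profile (the grep runs on the caller's side against the ssh output):
	
	    minikube ssh -p pause-212930 -- sudo dmesg | grep overlayfs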
	
	
	==> etcd [8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23] <==
	{"level":"info","ts":"2025-12-27T09:41:42.794923Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:41:42.808745Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:41:42.815858Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:41:42.823939Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:41:42.833908Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:41:42.834287Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:41:42.851012Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:42:14.101856Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T09:42:14.101905Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-212930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-27T09:42:14.102004Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:42:14.253709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254083Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:42:14.253868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.253904Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254012Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254266Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:42:14.254306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.254369Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-27T09:42:14.254405Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254125Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:42:14.254665Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.257610Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-27T09:42:14.257720Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.257748Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:42:14.257763Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-212930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008] <==
	{"level":"info","ts":"2025-12-27T09:42:21.799333Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:42:21.800409Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:42:21.800554Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:42:21.800857Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:42:21.800716Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:42:21.801070Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:42:21.800770Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:42:22.026358Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:42:22.026412Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:42:22.026464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:42:22.026478Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:42:22.026498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.031661Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.031724Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:42:22.031746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.031756Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.044357Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-212930 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:42:22.044401Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:42:22.044640Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:42:22.045541Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:42:22.047829Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:42:22.048720Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:42:22.049413Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:42:22.059158Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:42:22.059231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:42:32 up  2:25,  0 user,  load average: 3.53, 2.42, 2.43
	Linux pause-212930 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71] <==
	I1227 09:42:21.693134       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:42:21.714327       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:42:21.714476       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:42:21.714488       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:42:21.714504       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:42:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:42:21.899292       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:42:21.899325       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:42:21.899335       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:42:21.900043       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:42:25.200074       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:42:25.200182       1 metrics.go:72] Registering metrics
	I1227 09:42:25.200308       1 controller.go:711] "Syncing nftables rules"
	I1227 09:42:31.899368       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:42:31.899408       1 main.go:301] handling current node
	
	
	==> kindnet [8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b] <==
	I1227 09:41:58.827450       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:41:58.827811       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:41:58.827959       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:41:58.827998       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:41:58.828039       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:41:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:41:59.027480       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:41:59.027557       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:41:59.027593       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:41:59.027755       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:41:59.328667       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:41:59.328758       1 metrics.go:72] Registering metrics
	I1227 09:41:59.328833       1 controller.go:711] "Syncing nftables rules"
	I1227 09:42:09.027795       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:42:09.029162       1 main.go:301] handling current node
	
	
	==> kube-apiserver [03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4] <==
	W1227 09:42:14.131194       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131254       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131309       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131510       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131689       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131780       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131871       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.136251       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.136739       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137457       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137666       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137734       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137783       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137845       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137899       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137948       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137999       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138098       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138161       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138210       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138255       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138327       1 logging.go:55] [core] [Channel #12 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138395       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138503       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b] <==
	I1227 09:42:24.791634       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1227 09:42:25.113095       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:42:25.133572       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:42:25.151957       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.151994       1 policy_source.go:248] refreshing policies
	I1227 09:42:25.155301       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:42:25.155502       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 09:42:25.155563       1 aggregator.go:187] initial CRD sync complete...
	I1227 09:42:25.155594       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 09:42:25.155622       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:42:25.155650       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:42:25.155723       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.164191       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:42:25.164199       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:42:25.165569       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:42:25.166003       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:42:25.166024       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:42:25.170015       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:42:25.191224       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.191651       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.191715       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1227 09:42:25.202585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:42:25.211119       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:42:25.801872       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:42:27.074003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb] <==
	I1227 09:42:28.252738       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.252809       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.252932       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 09:42:28.253060       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-212930"
	I1227 09:42:28.253249       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 09:42:28.253499       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253566       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253614       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253691       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253836       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253940       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.255108       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257203       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257310       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257553       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257664       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257750       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257813       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.259701       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.269862       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:42:28.270575       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.361232       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.361257       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:42:28.361262       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:42:28.371829       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff] <==
	I1227 09:41:53.950463       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950469       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950476       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.951656       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.951787       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954324       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954365       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954375       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954471       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.017104       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954493       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.949573       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950439       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950446       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.984021       1 range_allocator.go:433] "Set node PodCIDR" node="pause-212930" podCIDRs=["10.244.0.0/24"]
	I1227 09:41:53.950243       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950369       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950452       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.994494       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:41:53.954483       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.324404       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.351400       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.351509       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:41:54.351539       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:42:13.965865       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe] <==
	I1227 09:41:55.845426       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:41:56.035042       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:41:56.135750       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:56.135784       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:41:56.135892       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:41:56.268866       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:41:56.268923       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:41:56.281862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:41:56.286784       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:41:56.286901       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:41:56.288411       1 config.go:200] "Starting service config controller"
	I1227 09:41:56.288422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:41:56.288438       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:41:56.288442       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:41:56.288453       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:41:56.288457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:41:56.289059       1 config.go:309] "Starting node config controller"
	I1227 09:41:56.289066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:41:56.289072       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:41:56.390228       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:41:56.390262       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:41:56.390288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23] <==
	I1227 09:42:21.988327       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:42:22.519382       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:42:25.220011       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.220054       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:42:25.220138       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:42:25.251488       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:42:25.251628       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:42:25.257254       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:42:25.257882       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:42:25.258137       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:42:25.260111       1 config.go:200] "Starting service config controller"
	I1227 09:42:25.260161       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:42:25.260184       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:42:25.260188       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:42:25.260198       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:42:25.260202       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:42:25.261027       1 config.go:309] "Starting node config controller"
	I1227 09:42:25.261091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:42:25.261122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:42:25.360834       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:42:25.360949       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:42:25.360964       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b] <==
	E1227 09:41:47.237167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:41:47.262474       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:41:47.285085       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:41:47.319285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:41:47.392124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:41:47.418891       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:41:47.421010       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:41:47.434519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:41:47.561846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:41:47.566671       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:41:47.572994       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:41:47.573986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:41:47.617470       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:41:47.714535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:41:47.753123       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:41:47.788105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:41:47.804965       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:41:47.891815       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1227 09:41:49.630257       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:14.116390       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1227 09:42:14.116422       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1227 09:42:14.116436       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1227 09:42:14.116504       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:42:14.116630       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1227 09:42:14.116651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea] <==
	I1227 09:42:23.414347       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:42:25.025450       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:42:25.025561       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:42:25.025595       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:42:25.025683       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:42:25.119873       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:42:25.122445       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:42:25.126953       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:42:25.127383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:42:25.127455       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:42:25.127539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:42:25.228315       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.082018    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-w88ml\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="b077dc1e-d0af-48bd-b8b0-4f775f0c07b9" pod="kube-system/kube-proxy-w88ml"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.086219    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-l2mpb\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="0633095a-3161-4f93-951b-90597bcc80cb" pod="kube-system/kindnet-l2mpb"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.089193    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-j52xk\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="7506b606-698b-481c-aac2-86984f3866e4" pod="kube-system/coredns-7d764666f9-j52xk"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.091189    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-l2mpb\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="0633095a-3161-4f93-951b-90597bcc80cb" pod="kube-system/kindnet-l2mpb"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.093177    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-j52xk\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="7506b606-698b-481c-aac2-86984f3866e4" pod="kube-system/coredns-7d764666f9-j52xk"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.100666    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-212930\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="7d1c6692bbfacffa6abe69f95d71bf07" pod="kube-system/kube-scheduler-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.103602    1295 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         pods "etcd-pause-212930" is forbidden: User "system:node:pause-212930" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-212930' and this object
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Dec 27 09:42:25 pause-212930 kubelet[1295]:  > podUID="de8ba352182032d872d8f55cb8dd7bbf" pod="kube-system/etcd-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.107793    1295 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         pods "kube-apiserver-pause-212930" is forbidden: User "system:node:pause-212930" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-212930' and this object
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Dec 27 09:42:25 pause-212930 kubelet[1295]:  > podUID="da941abb28a05575d73bb68025dd7154" pod="kube-system/kube-apiserver-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.114404    1295 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         pods "kube-controller-manager-pause-212930" is forbidden: User "system:node:pause-212930" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-212930' and this object
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Dec 27 09:42:25 pause-212930 kubelet[1295]:  > podUID="1c0a609087841c89e458b5d24d8dec71" pod="kube-system/kube-controller-manager-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.959785    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-212930" containerName="etcd"
	Dec 27 09:42:26 pause-212930 kubelet[1295]: E1227 09:42:26.555905    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-212930" containerName="kube-scheduler"
	Dec 27 09:42:27 pause-212930 kubelet[1295]: E1227 09:42:27.868930    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-212930" containerName="kube-apiserver"
	Dec 27 09:42:28 pause-212930 kubelet[1295]: E1227 09:42:28.590800    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-212930" containerName="kube-controller-manager"
	Dec 27 09:42:28 pause-212930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:42:28 pause-212930 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:42:28 pause-212930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-212930 -n pause-212930
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-212930 -n pause-212930: exit status 2 (340.95522ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-212930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-212930
helpers_test.go:244: (dbg) docker inspect pause-212930:

-- stdout --
	[
	    {
	        "Id": "67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556",
	        "Created": "2025-12-27T09:41:23.359687129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:41:25.825011528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/hostname",
	        "HostsPath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/hosts",
	        "LogPath": "/var/lib/docker/containers/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556/67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556-json.log",
	        "Name": "/pause-212930",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-212930:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-212930",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "67a0c964596f7a739fbaa2c0a4e582479ce58a63afefa782db1c1784ffa78556",
	                "LowerDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1bb2c53a8bced17b87ff2715a79b484e6f37dbf5f489c46dd9b475c7499b35b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-212930",
	                "Source": "/var/lib/docker/volumes/pause-212930/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-212930",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-212930",
	                "name.minikube.sigs.k8s.io": "pause-212930",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "907f8eef2d5c7943aeba1710ead13eec8eedd1418e89c753e492a30da0b72f3e",
	            "SandboxKey": "/var/run/docker/netns/907f8eef2d5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33340"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33338"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33339"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-212930": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:45:10:4d:23:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5ef6a920fbe5d46f9d5c8df54c3416c448684dd3ffc32c15c27db042128606cd",
	                    "EndpointID": "c3f4de98c622827e7e61e70e24deb63e5780f3718316c0953d51fda1d6926bc6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-212930",
	                        "67a0c964596f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-212930 -n pause-212930
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-212930 -n pause-212930: exit status 2 (323.43163ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-212930 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-212930 logs -n 25: (1.342785563s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-535956                                                                                         │ multinode-535956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:38 UTC │                     │
	│ start   │ -p multinode-535956-m02 --driver=docker  --container-runtime=crio                                                │ multinode-535956-m02        │ jenkins │ v1.37.0 │ 27 Dec 25 09:38 UTC │                     │
	│ start   │ -p multinode-535956-m03 --driver=docker  --container-runtime=crio                                                │ multinode-535956-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 09:38 UTC │ 27 Dec 25 09:39 UTC │
	│ node    │ add -p multinode-535956                                                                                          │ multinode-535956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ delete  │ -p multinode-535956-m03                                                                                          │ multinode-535956-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ delete  │ -p multinode-535956                                                                                              │ multinode-535956            │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ start   │ -p scheduled-stop-172677 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ stop    │ -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --cancel-scheduled                                                                      │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:39 UTC │ 27 Dec 25 09:39 UTC │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │                     │
	│ stop    │ -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │ 27 Dec 25 09:40 UTC │
	│ delete  │ -p scheduled-stop-172677                                                                                         │ scheduled-stop-172677       │ jenkins │ v1.37.0 │ 27 Dec 25 09:40 UTC │ 27 Dec 25 09:41 UTC │
	│ start   │ -p insufficient-storage-641667 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-641667 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │                     │
	│ delete  │ -p insufficient-storage-641667                                                                                   │ insufficient-storage-641667 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ start   │ -p pause-212930 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-212930                │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:42 UTC │
	│ start   │ -p missing-upgrade-080776 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-080776      │ jenkins │ v1.35.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:42 UTC │
	│ start   │ -p pause-212930 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-212930                │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:42 UTC │
	│ start   │ -p missing-upgrade-080776 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-080776      │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	│ pause   │ -p pause-212930 --alsologtostderr -v=5                                                                           │ pause-212930                │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:42:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:42:17.666518  430965 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:42:17.666705  430965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:42:17.666732  430965 out.go:374] Setting ErrFile to fd 2...
	I1227 09:42:17.666753  430965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:42:17.667034  430965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:42:17.667462  430965 out.go:368] Setting JSON to false
	I1227 09:42:17.668364  430965 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8687,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:42:17.668461  430965 start.go:143] virtualization:  
	I1227 09:42:17.673391  430965 out.go:179] * [missing-upgrade-080776] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:42:17.677266  430965 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:42:17.677427  430965 notify.go:221] Checking for updates...
	I1227 09:42:17.683086  430965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:42:17.685932  430965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:42:17.688686  430965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:42:17.691545  430965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:42:17.694315  430965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:42:17.697641  430965 config.go:182] Loaded profile config "missing-upgrade-080776": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 09:42:17.701188  430965 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 09:42:17.704063  430965 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:42:17.728755  430965 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:42:17.728868  430965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:42:17.786701  430965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:42:17.776674688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:42:17.786805  430965 docker.go:319] overlay module found
	I1227 09:42:17.789919  430965 out.go:179] * Using the docker driver based on existing profile
	I1227 09:42:17.792742  430965 start.go:309] selected driver: docker
	I1227 09:42:17.792760  430965 start.go:928] validating driver "docker" against &{Name:missing-upgrade-080776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-080776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:42:17.792874  430965 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:42:17.793587  430965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:42:17.844941  430965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:42:17.835245271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:42:17.845269  430965 cni.go:84] Creating CNI manager for ""
	I1227 09:42:17.845335  430965 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:42:17.845382  430965 start.go:353] cluster config:
	{Name:missing-upgrade-080776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-080776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:42:17.848458  430965 out.go:179] * Starting "missing-upgrade-080776" primary control-plane node in "missing-upgrade-080776" cluster
	I1227 09:42:17.851246  430965 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:42:17.856025  430965 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:42:17.858825  430965 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 09:42:17.858880  430965 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:42:17.858895  430965 cache.go:65] Caching tarball of preloaded images
	I1227 09:42:17.858899  430965 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 09:42:17.858980  430965 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:42:17.858989  430965 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 09:42:17.859102  430965 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/missing-upgrade-080776/config.json ...
	I1227 09:42:17.877823  430965 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 09:42:17.877847  430965 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 09:42:17.877867  430965 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:42:17.877897  430965 start.go:360] acquireMachinesLock for missing-upgrade-080776: {Name:mk3f368884f25182476d4c92af57261b5fc1f1f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:42:17.877954  430965 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "missing-upgrade-080776"
	I1227 09:42:17.877980  430965 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:42:17.877986  430965 fix.go:54] fixHost starting: 
	I1227 09:42:17.878275  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.893588  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:17.893653  430965 fix.go:112] recreateIfNeeded on missing-upgrade-080776: state= err=unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.893685  430965 fix.go:117] machineExists: false. err=machine does not exist
	I1227 09:42:17.896942  430965 out.go:179] * docker "missing-upgrade-080776" container is missing, will recreate.
	I1227 09:42:19.344170  430416 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:42:19.344195  430416 machine.go:97] duration metric: took 6.557372287s to provisionDockerMachine
	I1227 09:42:19.344206  430416 start.go:293] postStartSetup for "pause-212930" (driver="docker")
	I1227 09:42:19.344217  430416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:42:19.344285  430416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:42:19.344334  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.364653  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.465975  430416 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:42:19.469472  430416 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:42:19.469501  430416 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:42:19.469513  430416 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:42:19.469592  430416 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:42:19.469715  430416 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 09:42:19.469832  430416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:42:19.477562  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:42:19.496225  430416 start.go:296] duration metric: took 151.997998ms for postStartSetup
	I1227 09:42:19.496328  430416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:42:19.496391  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.514124  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.612288  430416 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:42:19.617386  430416 fix.go:56] duration metric: took 6.854220318s for fixHost
	I1227 09:42:19.617414  430416 start.go:83] releasing machines lock for "pause-212930", held for 6.854271593s
	I1227 09:42:19.617482  430416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-212930
	I1227 09:42:19.635420  430416 ssh_runner.go:195] Run: cat /version.json
	I1227 09:42:19.635468  430416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:42:19.635538  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.635472  430416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212930
	I1227 09:42:19.653729  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.655553  430416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/pause-212930/id_rsa Username:docker}
	I1227 09:42:19.749667  430416 ssh_runner.go:195] Run: systemctl --version
	I1227 09:42:19.851206  430416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:42:19.909312  430416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:42:19.913885  430416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:42:19.913959  430416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:42:19.923704  430416 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:42:19.923732  430416 start.go:496] detecting cgroup driver to use...
	I1227 09:42:19.923785  430416 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:42:19.923860  430416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:42:19.940168  430416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:42:19.952930  430416 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:42:19.953036  430416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:42:19.971481  430416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:42:19.985445  430416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:42:20.142802  430416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:42:20.288248  430416 docker.go:234] disabling docker service ...
	I1227 09:42:20.288317  430416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:42:20.304414  430416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:42:20.320168  430416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:42:20.482479  430416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:42:20.629348  430416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:42:20.643822  430416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:42:20.659310  430416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:42:20.659397  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.668310  430416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:42:20.668419  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.677313  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.686522  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.695656  430416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:42:20.703907  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.712738  430416 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.721070  430416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:42:20.729858  430416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:42:20.737355  430416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:42:20.745081  430416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:42:20.883536  430416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:42:21.177333  430416 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:42:21.177418  430416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:42:21.181406  430416 start.go:574] Will wait 60s for crictl version
	I1227 09:42:21.181532  430416 ssh_runner.go:195] Run: which crictl
	I1227 09:42:21.185092  430416 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:42:21.209180  430416 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:42:21.209363  430416 ssh_runner.go:195] Run: crio --version
	I1227 09:42:21.237522  430416 ssh_runner.go:195] Run: crio --version
	I1227 09:42:21.278704  430416 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:42:21.282069  430416 cli_runner.go:164] Run: docker network inspect pause-212930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:42:21.307883  430416 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:42:21.312584  430416 kubeadm.go:884] updating cluster {Name:pause-212930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-212930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:42:21.312768  430416 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:42:21.312834  430416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:42:21.403069  430416 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:42:21.403097  430416 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:42:21.403151  430416 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:42:21.503833  430416 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:42:21.503855  430416 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:42:21.503863  430416 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 09:42:21.503969  430416 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-212930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-212930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:42:21.504048  430416 ssh_runner.go:195] Run: crio config
	I1227 09:42:21.667570  430416 cni.go:84] Creating CNI manager for ""
	I1227 09:42:21.667645  430416 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:42:21.667679  430416 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:42:21.667719  430416 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-212930 NodeName:pause-212930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:42:21.667875  430416 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-212930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
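
The kubeadm config rendered above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`, later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of splitting such a stream into its documents, assuming gopkg.in/yaml.v3 and abbreviated document bodies (not part of the test output):

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3" // assumption: any YAML library with multi-document decoding would do
)

// Prints the kind and apiVersion of each document in a "---"-separated stream,
// mirroring the InitConfiguration/ClusterConfiguration/... layout shown above.
func main() {
	const stream = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```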
	
	I1227 09:42:21.667970  430416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:42:21.681431  430416 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:42:21.681544  430416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:42:21.697638  430416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1227 09:42:21.720491  430416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:42:21.743608  430416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1227 09:42:21.766517  430416 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:42:21.773479  430416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:42:21.980852  430416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:42:21.997123  430416 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930 for IP: 192.168.76.2
	I1227 09:42:21.997142  430416 certs.go:195] generating shared ca certs ...
	I1227 09:42:21.997163  430416 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:42:21.997305  430416 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:42:21.997347  430416 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:42:21.997354  430416 certs.go:257] generating profile certs ...
	I1227 09:42:21.997437  430416 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.key
	I1227 09:42:21.998368  430416 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/apiserver.key.0d8d2798
	I1227 09:42:21.999077  430416 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/proxy-client.key
	I1227 09:42:21.999219  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 09:42:21.999251  430416 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 09:42:21.999258  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:42:21.999284  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:42:21.999312  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:42:21.999336  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:42:21.999384  430416 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:42:22.000089  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:42:22.028404  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:42:22.059636  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:42:22.087497  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:42:22.108436  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:42:22.133599  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:42:22.163221  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:42:22.192274  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 09:42:22.220536  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 09:42:22.252444  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 09:42:22.283844  430416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:42:22.319233  430416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:42:22.352431  430416 ssh_runner.go:195] Run: openssl version
	I1227 09:42:22.363423  430416 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:22.373611  430416 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:42:22.391847  430416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:22.398978  430416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:22.399088  430416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:42:17.899777  430965 delete.go:124] DEMOLISHING missing-upgrade-080776 ...
	I1227 09:42:17.899875  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.915673  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	W1227 09:42:17.915734  430965 stop.go:83] unable to get state: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.915755  430965 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.916234  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.931659  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:17.931724  430965 delete.go:82] Unable to get host status for missing-upgrade-080776, assuming it has already been deleted: state: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.931799  430965 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-080776
	W1227 09:42:17.947046  430965 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-080776 returned with exit code 1
	I1227 09:42:17.947090  430965 kic.go:371] could not find the container missing-upgrade-080776 to remove it. will try anyways
	I1227 09:42:17.947151  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:17.962584  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	W1227 09:42:17.962666  430965 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:17.962737  430965 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-080776 /bin/bash -c "sudo init 0"
	W1227 09:42:17.976515  430965 cli_runner.go:211] docker exec --privileged -t missing-upgrade-080776 /bin/bash -c "sudo init 0" returned with exit code 1
	I1227 09:42:17.976548  430965 oci.go:659] error shutdown missing-upgrade-080776: docker exec --privileged -t missing-upgrade-080776 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:18.976783  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:18.995280  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:18.995348  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:18.995360  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:18.995403  430965 retry.go:84] will retry after 400ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:19.372198  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:19.387531  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:19.387586  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:19.387596  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:20.026341  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:20.066202  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:20.066280  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:20.066290  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:21.753130  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:21.782377  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:21.782441  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:21.782460  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:22.478949  430416 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:42:22.487765  430416 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.510687  430416 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 09:42:22.519132  430416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.523877  430416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.524025  430416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 09:42:22.582107  430416 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:42:22.590544  430416 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.598517  430416 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 09:42:22.606941  430416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.611297  430416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.611408  430416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 09:42:22.655975  430416 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
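
The certificate steps above copy each CA into /usr/share/ca-certificates, link it into /etc/ssl/certs by name, and then verify a hash-named link such as b5213941.0, 51391683.0 or 3ec20f2e.0 derived from `openssl x509 -hash -noout`. A rough sketch of that hash-and-link step, assuming openssl is on PATH; the helper name installCA is illustrative, not minikube's:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject hash (e.g. b5213941.0),
// roughly the same wiring of /usr/share/ca-certificates/*.pem into /etc/ssl/certs seen above.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	// Paths are placeholders taken from the log above.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```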
	I1227 09:42:22.664293  430416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:42:22.669689  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:42:22.716972  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:42:22.760673  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:42:22.801801  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:42:22.845927  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:42:22.891949  430416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:42:22.946491  430416 kubeadm.go:401] StartCluster: {Name:pause-212930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-212930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:42:22.946672  430416 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:42:22.946756  430416 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:42:23.005031  430416 cri.go:96] found id: "aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886"
	I1227 09:42:23.005117  430416 cri.go:96] found id: "78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71"
	I1227 09:42:23.005136  430416 cri.go:96] found id: "fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008"
	I1227 09:42:23.005166  430416 cri.go:96] found id: "36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b"
	I1227 09:42:23.005188  430416 cri.go:96] found id: "c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea"
	I1227 09:42:23.005211  430416 cri.go:96] found id: "e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb"
	I1227 09:42:23.005234  430416 cri.go:96] found id: "368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23"
	I1227 09:42:23.005265  430416 cri.go:96] found id: "a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786"
	I1227 09:42:23.005282  430416 cri.go:96] found id: "8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b"
	I1227 09:42:23.005315  430416 cri.go:96] found id: "123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe"
	I1227 09:42:23.005342  430416 cri.go:96] found id: "03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4"
	I1227 09:42:23.005364  430416 cri.go:96] found id: "1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b"
	I1227 09:42:23.005387  430416 cri.go:96] found id: "8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23"
	I1227 09:42:23.005410  430416 cri.go:96] found id: "f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff"
	I1227 09:42:23.005439  430416 cri.go:96] found id: ""
	I1227 09:42:23.005525  430416 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:42:23.033124  430416 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:42:23Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:42:23.033233  430416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:42:23.051487  430416 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:42:23.051559  430416 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:42:23.051632  430416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:42:23.059931  430416 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:42:23.060701  430416 kubeconfig.go:125] found "pause-212930" server: "https://192.168.76.2:8443"
	I1227 09:42:23.061624  430416 kapi.go:59] client config for pause-212930: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.key", CAFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:42:23.062677  430416 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:42:23.062788  430416 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:42:23.062824  430416 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:42:23.062846  430416 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:42:23.062886  430416 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:42:23.062908  430416 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:42:23.063282  430416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:42:23.077762  430416 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 09:42:23.077839  430416 kubeadm.go:602] duration metric: took 26.259931ms to restartPrimaryControlPlane
	I1227 09:42:23.077866  430416 kubeadm.go:403] duration metric: took 131.384317ms to StartCluster
	I1227 09:42:23.077897  430416 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:42:23.077977  430416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:42:23.078992  430416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:42:23.079271  430416 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:42:23.079694  430416 config.go:182] Loaded profile config "pause-212930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:42:23.079718  430416 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:42:23.082597  430416 out.go:179] * Verifying Kubernetes components...
	I1227 09:42:23.082688  430416 out.go:179] * Enabled addons: 
	I1227 09:42:23.085467  430416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:42:23.085578  430416 addons.go:530] duration metric: took 5.864863ms for enable addons: enabled=[]
	I1227 09:42:23.295015  430416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:42:23.309701  430416 node_ready.go:35] waiting up to 6m0s for node "pause-212930" to be "Ready" ...
	I1227 09:42:25.044551  430416 node_ready.go:49] node "pause-212930" is "Ready"
	I1227 09:42:25.044579  430416 node_ready.go:38] duration metric: took 1.734843697s for node "pause-212930" to be "Ready" ...
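
Waiting for the node's Ready condition, as node_ready.go does above, can be approximated with client-go roughly as follows; the kubeconfig path, node name and polling interval are placeholders for this sketch, not the harness code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has the Ready condition set to True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path and node name taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22344-301174/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		if ready, err := nodeReady(ctx, cs, "pause-212930"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```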
	I1227 09:42:25.044591  430416 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:42:25.044649  430416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:42:25.062867  430416 api_server.go:72] duration metric: took 1.983534382s to wait for apiserver process to appear ...
	I1227 09:42:25.062892  430416 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:42:25.062921  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:25.080990  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 09:42:25.081067  430416 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 09:42:25.563759  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:25.572268  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:42:25.572304  430416 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:42:26.063634  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:26.072323  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:42:26.072407  430416 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:42:26.563030  430416 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 09:42:26.573153  430416 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 09:42:26.574440  430416 api_server.go:141] control plane version: v1.35.0
	I1227 09:42:26.574475  430416 api_server.go:131] duration metric: took 1.511575457s to wait for apiserver health ...
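
The healthz wait above moves from 403 (anonymous access to /healthz) through 500 (post-start hooks such as rbac/bootstrap-roles still failing) to 200. A simplified polling sketch of the same idea; TLS verification is skipped only to keep the example short, and the host and timeout are placeholders:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```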
	I1227 09:42:26.574484  430416 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:42:26.578755  430416 system_pods.go:59] 7 kube-system pods found
	I1227 09:42:26.578852  430416 system_pods.go:61] "coredns-7d764666f9-j52xk" [7506b606-698b-481c-aac2-86984f3866e4] Running
	I1227 09:42:26.578876  430416 system_pods.go:61] "etcd-pause-212930" [5fe7f336-926a-4740-bcd1-0ef4efe13456] Running
	I1227 09:42:26.578915  430416 system_pods.go:61] "kindnet-l2mpb" [0633095a-3161-4f93-951b-90597bcc80cb] Running
	I1227 09:42:26.578941  430416 system_pods.go:61] "kube-apiserver-pause-212930" [323d69e8-83e1-441d-91c3-6f40f5b90b85] Running
	I1227 09:42:26.578964  430416 system_pods.go:61] "kube-controller-manager-pause-212930" [c0117059-f3c1-4352-8ae1-4d8a92e83dc3] Running
	I1227 09:42:26.579013  430416 system_pods.go:61] "kube-proxy-w88ml" [b077dc1e-d0af-48bd-b8b0-4f775f0c07b9] Running
	I1227 09:42:26.579039  430416 system_pods.go:61] "kube-scheduler-pause-212930" [c943ea12-6351-4069-974d-211bbafa2b2e] Running
	I1227 09:42:26.579086  430416 system_pods.go:74] duration metric: took 4.571932ms to wait for pod list to return data ...
	I1227 09:42:26.579116  430416 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:42:26.584456  430416 default_sa.go:45] found service account: "default"
	I1227 09:42:26.584486  430416 default_sa.go:55] duration metric: took 5.348943ms for default service account to be created ...
	I1227 09:42:26.584497  430416 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:42:26.587649  430416 system_pods.go:86] 7 kube-system pods found
	I1227 09:42:26.587683  430416 system_pods.go:89] "coredns-7d764666f9-j52xk" [7506b606-698b-481c-aac2-86984f3866e4] Running
	I1227 09:42:26.587690  430416 system_pods.go:89] "etcd-pause-212930" [5fe7f336-926a-4740-bcd1-0ef4efe13456] Running
	I1227 09:42:26.587695  430416 system_pods.go:89] "kindnet-l2mpb" [0633095a-3161-4f93-951b-90597bcc80cb] Running
	I1227 09:42:26.587699  430416 system_pods.go:89] "kube-apiserver-pause-212930" [323d69e8-83e1-441d-91c3-6f40f5b90b85] Running
	I1227 09:42:26.587723  430416 system_pods.go:89] "kube-controller-manager-pause-212930" [c0117059-f3c1-4352-8ae1-4d8a92e83dc3] Running
	I1227 09:42:26.587735  430416 system_pods.go:89] "kube-proxy-w88ml" [b077dc1e-d0af-48bd-b8b0-4f775f0c07b9] Running
	I1227 09:42:26.587740  430416 system_pods.go:89] "kube-scheduler-pause-212930" [c943ea12-6351-4069-974d-211bbafa2b2e] Running
	I1227 09:42:26.587752  430416 system_pods.go:126] duration metric: took 3.24846ms to wait for k8s-apps to be running ...
	I1227 09:42:26.587763  430416 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:42:26.587831  430416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:26.606674  430416 system_svc.go:56] duration metric: took 18.901076ms WaitForService to wait for kubelet
	I1227 09:42:26.606745  430416 kubeadm.go:587] duration metric: took 3.527415468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:42:26.606782  430416 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:42:26.610090  430416 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 09:42:26.610186  430416 node_conditions.go:123] node cpu capacity is 2
	I1227 09:42:26.610216  430416 node_conditions.go:105] duration metric: took 3.411761ms to run NodePressure ...
	I1227 09:42:26.610245  430416 start.go:242] waiting for startup goroutines ...
	I1227 09:42:26.610283  430416 start.go:247] waiting for cluster config update ...
	I1227 09:42:26.610308  430416 start.go:256] writing updated cluster config ...
	I1227 09:42:26.610642  430416 ssh_runner.go:195] Run: rm -f paused
	I1227 09:42:26.614765  430416 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:42:26.615466  430416 kapi.go:59] client config for pause-212930: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.crt", KeyFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/profiles/pause-212930/client.key", CAFile:"/home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:42:26.618745  430416 pod_ready.go:83] waiting for pod "coredns-7d764666f9-j52xk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.624590  430416 pod_ready.go:94] pod "coredns-7d764666f9-j52xk" is "Ready"
	I1227 09:42:26.624666  430416 pod_ready.go:86] duration metric: took 5.859653ms for pod "coredns-7d764666f9-j52xk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.627650  430416 pod_ready.go:83] waiting for pod "etcd-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.633261  430416 pod_ready.go:94] pod "etcd-pause-212930" is "Ready"
	I1227 09:42:26.633300  430416 pod_ready.go:86] duration metric: took 5.622423ms for pod "etcd-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.636121  430416 pod_ready.go:83] waiting for pod "kube-apiserver-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.640955  430416 pod_ready.go:94] pod "kube-apiserver-pause-212930" is "Ready"
	I1227 09:42:26.641026  430416 pod_ready.go:86] duration metric: took 4.87674ms for pod "kube-apiserver-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:26.643472  430416 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:27.019099  430416 pod_ready.go:94] pod "kube-controller-manager-pause-212930" is "Ready"
	I1227 09:42:27.019127  430416 pod_ready.go:86] duration metric: took 375.630834ms for pod "kube-controller-manager-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:27.219307  430416 pod_ready.go:83] waiting for pod "kube-proxy-w88ml" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:24.198858  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:24.227858  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:24.227918  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:24.227926  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:25.643606  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:25.659924  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:25.660000  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:25.660011  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:27.619463  430416 pod_ready.go:94] pod "kube-proxy-w88ml" is "Ready"
	I1227 09:42:27.619491  430416 pod_ready.go:86] duration metric: took 400.152707ms for pod "kube-proxy-w88ml" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:27.819727  430416 pod_ready.go:83] waiting for pod "kube-scheduler-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:28.219356  430416 pod_ready.go:94] pod "kube-scheduler-pause-212930" is "Ready"
	I1227 09:42:28.219439  430416 pod_ready.go:86] duration metric: took 399.638535ms for pod "kube-scheduler-pause-212930" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:42:28.219459  430416 pod_ready.go:40] duration metric: took 1.604623575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:42:28.315358  430416 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 09:42:28.318499  430416 out.go:203] 
	W1227 09:42:28.321581  430416 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 09:42:28.324398  430416 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:42:28.327382  430416 out.go:179] * Done! kubectl is now configured to use "pause-212930" cluster and "default" namespace by default
	I1227 09:42:29.823742  430965 cli_runner.go:164] Run: docker container inspect missing-upgrade-080776 --format={{.State.Status}}
	W1227 09:42:29.842290  430965 cli_runner.go:211] docker container inspect missing-upgrade-080776 --format={{.State.Status}} returned with exit code 1
	I1227 09:42:29.842376  430965 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
	I1227 09:42:29.842387  430965 oci.go:673] temporary error: container missing-upgrade-080776 status is  but expect it to be exited
	I1227 09:42:29.842418  430965 retry.go:84] will retry after 7.8s: couldn't verify container is exited. %v: unknown state "missing-upgrade-080776": docker container inspect missing-upgrade-080776 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-080776
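
The delete path above keeps re-running `docker container inspect` with growing delays (400ms, then several seconds) before giving up on verifying shutdown. A simplified retry sketch of that loop; unlike the real code, it treats any inspect failure as "container gone", and the container name and delay schedule are illustrative only:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitGone retries "docker container inspect" with the given delays until the container
// no longer resolves, loosely following the verify-shutdown loop in the log above.
func waitGone(name string, delays []time.Duration) bool {
	for _, d := range delays {
		err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Run()
		if err != nil {
			return true // inspect failed (e.g. "No such container"): treat as gone in this sketch
		}
		fmt.Printf("%s still present, retrying in %s\n", name, d)
		time.Sleep(d)
	}
	return false
}

func main() {
	gone := waitGone("missing-upgrade-080776",
		[]time.Duration{400 * time.Millisecond, 2 * time.Second, 7800 * time.Millisecond})
	fmt.Println("container gone:", gone)
}
```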
	
	
	==> CRI-O <==
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.485555141Z" level=info msg="Starting container: 36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b" id=3657b67d-6f92-4980-8966-658a7bc43798 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.486054856Z" level=info msg="Created container c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea: kube-system/kube-scheduler-pause-212930/kube-scheduler" id=1a754748-9609-43dd-be6c-c73c4adc9daa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.499070432Z" level=info msg="Starting container: 78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71" id=f5e11a5b-56c2-48c1-96d1-accf4a87aa45 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.516851223Z" level=info msg="Started container" PID=2188 containerID=36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b description=kube-system/kube-apiserver-pause-212930/kube-apiserver id=3657b67d-6f92-4980-8966-658a7bc43798 name=/runtime.v1.RuntimeService/StartContainer sandboxID=230deea308df7bac2f3a284f829fd83a5010ae55ec8895270ff1fed1812642c4
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.528760603Z" level=info msg="Created container fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008: kube-system/etcd-pause-212930/etcd" id=faf7ff9e-161b-493c-a9f7-1cd293b2d1b8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.529135196Z" level=info msg="Starting container: c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea" id=af70f7db-c8fe-4cf2-b452-2dff5ff89fa2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.534834617Z" level=info msg="Starting container: fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008" id=98e06a18-9889-4b0f-83d5-64cf04443d9f name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.535950652Z" level=info msg="Started container" PID=2205 containerID=78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71 description=kube-system/kindnet-l2mpb/kindnet-cni id=f5e11a5b-56c2-48c1-96d1-accf4a87aa45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd098f3f156ed010c45afc444cce1539ac78633abcdbbcb60c64f58bdc8177b6
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.537033981Z" level=info msg="Started container" PID=2216 containerID=fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008 description=kube-system/etcd-pause-212930/etcd id=98e06a18-9889-4b0f-83d5-64cf04443d9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=da386ae3581cdd7845e504c6c22377264821d416c52496825402686bfcf7addc
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.540143387Z" level=info msg="Created container aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886: kube-system/coredns-7d764666f9-j52xk/coredns" id=43277de1-85f7-4f41-8e88-db4cbc7533c6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.544301635Z" level=info msg="Starting container: aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886" id=6756bfb7-7feb-4a2f-bc6c-291be06a25ce name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.556162653Z" level=info msg="Started container" PID=2182 containerID=c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea description=kube-system/kube-scheduler-pause-212930/kube-scheduler id=af70f7db-c8fe-4cf2-b452-2dff5ff89fa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=64872c42cfb38f9b1943c4630d7cb0365b1a1f8bfad37e6dd74933a8cc41a02d
	Dec 27 09:42:21 pause-212930 crio[2084]: time="2025-12-27T09:42:21.581849884Z" level=info msg="Started container" PID=2219 containerID=aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886 description=kube-system/coredns-7d764666f9-j52xk/coredns id=6756bfb7-7feb-4a2f-bc6c-291be06a25ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=702192942d5ccfffe8985898cfade8e7fefab5bef7cc36f41c666730a5696cb8
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.899973382Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.904122227Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.904156936Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.90417909Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.907462422Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.907495284Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.907514041Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.910823368Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.910960502Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.911039198Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.91441836Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 09:42:31 pause-212930 crio[2084]: time="2025-12-27T09:42:31.914453216Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	aa41a510f7223       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     12 seconds ago      Running             coredns                   1                   702192942d5cc       coredns-7d764666f9-j52xk               kube-system
	78e44803c8142       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     12 seconds ago      Running             kindnet-cni               1                   fd098f3f156ed       kindnet-l2mpb                          kube-system
	fb5d5abd088bf       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     12 seconds ago      Running             etcd                      1                   da386ae3581cd       etcd-pause-212930                      kube-system
	36a549f100e5b       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     12 seconds ago      Running             kube-apiserver            1                   230deea308df7       kube-apiserver-pause-212930            kube-system
	c93fe8161deae       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     12 seconds ago      Running             kube-scheduler            1                   64872c42cfb38       kube-scheduler-pause-212930            kube-system
	e43c40729d4c0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     12 seconds ago      Running             kube-controller-manager   1                   24339f8acf8fc       kube-controller-manager-pause-212930   kube-system
	368c7201327cb       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     12 seconds ago      Running             kube-proxy                1                   1291c634d9fd1       kube-proxy-w88ml                       kube-system
	a92514a62346a       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     24 seconds ago      Exited              coredns                   0                   702192942d5cc       coredns-7d764666f9-j52xk               kube-system
	8e8d093798bba       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   35 seconds ago      Exited              kindnet-cni               0                   fd098f3f156ed       kindnet-l2mpb                          kube-system
	123709a40ad13       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     38 seconds ago      Exited              kube-proxy                0                   1291c634d9fd1       kube-proxy-w88ml                       kube-system
	03c077c9fc8b5       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     52 seconds ago      Exited              kube-apiserver            0                   230deea308df7       kube-apiserver-pause-212930            kube-system
	1074f4a0ea38d       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     52 seconds ago      Exited              kube-scheduler            0                   64872c42cfb38       kube-scheduler-pause-212930            kube-system
	8984abd55ae2a       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     52 seconds ago      Exited              etcd                      0                   da386ae3581cd       etcd-pause-212930                      kube-system
	f1c6a1c1c6239       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     52 seconds ago      Exited              kube-controller-manager   0                   24339f8acf8fc       kube-controller-manager-pause-212930   kube-system
	
	
	==> coredns [a92514a62346a43118a86dbc159a30b0a844a38b3ffde54e8fe87cd4f8ff9786] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50170 - 44552 "HINFO IN 4153360780497730766.5679363157234129485. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041262471s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa41a510f7223dca1286c7561932eedc33fa445732808526059960960b028886] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49243 - 65142 "HINFO IN 5311795279361908259.7512581213843604179. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013767651s
	
	
	==> describe nodes <==
	Name:               pause-212930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-212930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=pause-212930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:41:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-212930
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:42:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:41:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:41:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:41:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:42:09 +0000   Sat, 27 Dec 2025 09:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-212930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                8d0e12be-86e6-4a4d-b390-142e4fbcd202
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-j52xk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39s
	  kube-system                 etcd-pause-212930                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         45s
	  kube-system                 kindnet-l2mpb                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-pause-212930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-pause-212930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-w88ml                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-pause-212930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  41s   node-controller  Node pause-212930 event: Registered Node pause-212930 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node pause-212930 event: Registered Node pause-212930 in Controller
	
	
	==> dmesg <==
	[Dec27 09:21] overlayfs: idmapped layers are currently not supported
	[Dec27 09:22] overlayfs: idmapped layers are currently not supported
	[Dec27 09:23] overlayfs: idmapped layers are currently not supported
	[Dec27 09:24] overlayfs: idmapped layers are currently not supported
	[  +3.021431] overlayfs: idmapped layers are currently not supported
	[Dec27 09:25] overlayfs: idmapped layers are currently not supported
	[ +42.046056] overlayfs: idmapped layers are currently not supported
	[Dec27 09:26] overlayfs: idmapped layers are currently not supported
	[  +3.426470] overlayfs: idmapped layers are currently not supported
	[Dec27 09:27] overlayfs: idmapped layers are currently not supported
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8984abd55ae2a6b70149f64ccc0e67debf1bfd4c7a664ace1295f91f08bc8c23] <==
	{"level":"info","ts":"2025-12-27T09:41:42.794923Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:41:42.808745Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:41:42.815858Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:41:42.823939Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:41:42.833908Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:41:42.834287Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:41:42.851012Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:42:14.101856Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T09:42:14.101905Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-212930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-27T09:42:14.102004Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:42:14.253709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254083Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:42:14.253868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.253904Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254012Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254266Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:42:14.254306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.254369Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-27T09:42:14.254405Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-27T09:42:14.254125Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:42:14.254665Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.257610Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-27T09:42:14.257720Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:42:14.257748Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:42:14.257763Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-212930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [fb5d5abd088bf2b2019159161bad02e4b126be613fbd648e5ba75ce0a8d89008] <==
	{"level":"info","ts":"2025-12-27T09:42:21.799333Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:42:21.800409Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T09:42:21.800554Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:42:21.800857Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:42:21.800716Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:42:21.801070Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T09:42:21.800770Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T09:42:22.026358Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T09:42:22.026412Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:42:22.026464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T09:42:22.026478Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:42:22.026498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.031661Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.031724Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:42:22.031746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.031756Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T09:42:22.044357Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-212930 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:42:22.044401Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:42:22.044640Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:42:22.045541Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:42:22.047829Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T09:42:22.048720Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:42:22.049413Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T09:42:22.059158Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:42:22.059231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:42:34 up  2:25,  0 user,  load average: 3.32, 2.40, 2.42
	Linux pause-212930 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78e44803c8142565ec41e1482889f7692ec1eb9df4682d12be987925aba4dc71] <==
	I1227 09:42:21.693134       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:42:21.714327       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:42:21.714476       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:42:21.714488       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:42:21.714504       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:42:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:42:21.899292       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:42:21.899325       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:42:21.899335       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:42:21.900043       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:42:25.200074       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:42:25.200182       1 metrics.go:72] Registering metrics
	I1227 09:42:25.200308       1 controller.go:711] "Syncing nftables rules"
	I1227 09:42:31.899368       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:42:31.899408       1 main.go:301] handling current node
	
	
	==> kindnet [8e8d093798bba459f2cdf4bfbdf2b34b4f04f26043d223f4d234c6db2c17986b] <==
	I1227 09:41:58.827450       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 09:41:58.827811       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 09:41:58.827959       1 main.go:148] setting mtu 1500 for CNI 
	I1227 09:41:58.827998       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 09:41:58.828039       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T09:41:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 09:41:59.027480       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 09:41:59.027557       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 09:41:59.027593       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 09:41:59.027755       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 09:41:59.328667       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 09:41:59.328758       1 metrics.go:72] Registering metrics
	I1227 09:41:59.328833       1 controller.go:711] "Syncing nftables rules"
	I1227 09:42:09.027795       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 09:42:09.029162       1 main.go:301] handling current node
	
	
	==> kube-apiserver [03c077c9fc8b59712914c587598b07b45434df3ac40128669bc567d1738e39f4] <==
	W1227 09:42:14.131194       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131254       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131309       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131510       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131689       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131780       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.131871       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.136251       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.136739       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137457       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137666       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137734       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137783       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137845       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137899       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137948       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.137999       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138098       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138161       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138210       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138255       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138327       1 logging.go:55] [core] [Channel #12 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138395       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 09:42:14.138503       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [36a549f100e5b12b82a2441b6f9c890e71d25fb1797c9e47186609cd2e2e1a6b] <==
	I1227 09:42:24.791634       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1227 09:42:25.113095       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 09:42:25.133572       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 09:42:25.151957       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.151994       1 policy_source.go:248] refreshing policies
	I1227 09:42:25.155301       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 09:42:25.155502       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 09:42:25.155563       1 aggregator.go:187] initial CRD sync complete...
	I1227 09:42:25.155594       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 09:42:25.155622       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:42:25.155650       1 cache.go:39] Caches are synced for autoregister controller
	I1227 09:42:25.155723       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.164191       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 09:42:25.164199       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 09:42:25.165569       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:42:25.166003       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 09:42:25.166024       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 09:42:25.170015       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 09:42:25.191224       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.191651       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.191715       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1227 09:42:25.202585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 09:42:25.211119       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 09:42:25.801872       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 09:42:27.074003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [e43c40729d4c0469cf5b2a084aa9ebe018a885cdfe51f7d958c4ff17a340bdeb] <==
	I1227 09:42:28.252738       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.252809       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.252932       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 09:42:28.253060       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-212930"
	I1227 09:42:28.253249       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 09:42:28.253499       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253566       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253614       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253691       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253836       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.253940       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.255108       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257203       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257310       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257553       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257664       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257750       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.257813       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.259701       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.269862       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:42:28.270575       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.361232       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:28.361257       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:42:28.361262       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:42:28.371829       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [f1c6a1c1c62397748c6399aaa614424035b1cae6e1916e7a474286917d5684ff] <==
	I1227 09:41:53.950463       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950469       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950476       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.951656       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.951787       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954324       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954365       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954375       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954471       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.017104       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.954493       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.949573       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950439       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950446       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.984021       1 range_allocator.go:433] "Set node PodCIDR" node="pause-212930" podCIDRs=["10.244.0.0/24"]
	I1227 09:41:53.950243       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950369       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.950452       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:53.994494       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:41:53.954483       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.324404       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.351400       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:54.351509       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:41:54.351539       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:42:13.965865       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [123709a40ad132b23744a3beeceee9b5e5c7095f906902ee953204b4702278fe] <==
	I1227 09:41:55.845426       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:41:56.035042       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:41:56.135750       1 shared_informer.go:377] "Caches are synced"
	I1227 09:41:56.135784       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:41:56.135892       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:41:56.268866       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:41:56.268923       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:41:56.281862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:41:56.286784       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:41:56.286901       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:41:56.288411       1 config.go:200] "Starting service config controller"
	I1227 09:41:56.288422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:41:56.288438       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:41:56.288442       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:41:56.288453       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:41:56.288457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:41:56.289059       1 config.go:309] "Starting node config controller"
	I1227 09:41:56.289066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:41:56.289072       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:41:56.390228       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:41:56.390262       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:41:56.390288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [368c7201327cbe37eff9370846fa248f727cbfe09186ebc53e2cef14b8a9dc23] <==
	I1227 09:42:21.988327       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:42:22.519382       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:42:25.220011       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:25.220054       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 09:42:25.220138       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:42:25.251488       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:42:25.251628       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:42:25.257254       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:42:25.257882       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:42:25.258137       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:42:25.260111       1 config.go:200] "Starting service config controller"
	I1227 09:42:25.260161       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:42:25.260184       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:42:25.260188       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:42:25.260198       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:42:25.260202       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:42:25.261027       1 config.go:309] "Starting node config controller"
	I1227 09:42:25.261091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:42:25.261122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:42:25.360834       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:42:25.360949       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:42:25.360964       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1074f4a0ea38ddbcac7244f9ddcdef61ec82f29cbe311ebaaaa4751e92095d5b] <==
	E1227 09:41:47.237167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:41:47.262474       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:41:47.285085       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:41:47.319285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:41:47.392124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:41:47.418891       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:41:47.421010       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:41:47.434519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:41:47.561846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:41:47.566671       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:41:47.572994       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:41:47.573986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:41:47.617470       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:41:47.714535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:41:47.753123       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:41:47.788105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:41:47.804965       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:41:47.891815       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1227 09:41:49.630257       1 shared_informer.go:377] "Caches are synced"
	I1227 09:42:14.116390       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1227 09:42:14.116422       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1227 09:42:14.116436       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1227 09:42:14.116504       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:42:14.116630       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1227 09:42:14.116651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c93fe8161deae8dd5efcbd7f268e872176eb4d2c3df8e52e409d42dd300f7cea] <==
	I1227 09:42:23.414347       1 serving.go:386] Generated self-signed cert in-memory
	W1227 09:42:25.025450       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 09:42:25.025561       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 09:42:25.025595       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 09:42:25.025683       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 09:42:25.119873       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 09:42:25.122445       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:42:25.126953       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 09:42:25.127383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 09:42:25.127455       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:42:25.127539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 09:42:25.228315       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.082018    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-w88ml\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="b077dc1e-d0af-48bd-b8b0-4f775f0c07b9" pod="kube-system/kube-proxy-w88ml"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.086219    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-l2mpb\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="0633095a-3161-4f93-951b-90597bcc80cb" pod="kube-system/kindnet-l2mpb"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.089193    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-j52xk\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="7506b606-698b-481c-aac2-86984f3866e4" pod="kube-system/coredns-7d764666f9-j52xk"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.091189    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-l2mpb\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="0633095a-3161-4f93-951b-90597bcc80cb" pod="kube-system/kindnet-l2mpb"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.093177    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-j52xk\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="7506b606-698b-481c-aac2-86984f3866e4" pod="kube-system/coredns-7d764666f9-j52xk"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.100666    1295 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-212930\" is forbidden: User \"system:node:pause-212930\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-212930' and this object" podUID="7d1c6692bbfacffa6abe69f95d71bf07" pod="kube-system/kube-scheduler-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.103602    1295 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         pods "etcd-pause-212930" is forbidden: User "system:node:pause-212930" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-212930' and this object
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Dec 27 09:42:25 pause-212930 kubelet[1295]:  > podUID="de8ba352182032d872d8f55cb8dd7bbf" pod="kube-system/etcd-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.107793    1295 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         pods "kube-apiserver-pause-212930" is forbidden: User "system:node:pause-212930" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-212930' and this object
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Dec 27 09:42:25 pause-212930 kubelet[1295]:  > podUID="da941abb28a05575d73bb68025dd7154" pod="kube-system/kube-apiserver-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.114404    1295 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         pods "kube-controller-manager-pause-212930" is forbidden: User "system:node:pause-212930" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-212930' and this object
	Dec 27 09:42:25 pause-212930 kubelet[1295]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Dec 27 09:42:25 pause-212930 kubelet[1295]:  > podUID="1c0a609087841c89e458b5d24d8dec71" pod="kube-system/kube-controller-manager-pause-212930"
	Dec 27 09:42:25 pause-212930 kubelet[1295]: E1227 09:42:25.959785    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-212930" containerName="etcd"
	Dec 27 09:42:26 pause-212930 kubelet[1295]: E1227 09:42:26.555905    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-212930" containerName="kube-scheduler"
	Dec 27 09:42:27 pause-212930 kubelet[1295]: E1227 09:42:27.868930    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-212930" containerName="kube-apiserver"
	Dec 27 09:42:28 pause-212930 kubelet[1295]: E1227 09:42:28.590800    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-212930" containerName="kube-controller-manager"
	Dec 27 09:42:28 pause-212930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 09:42:28 pause-212930 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 09:42:28 pause-212930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-212930 -n pause-212930
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-212930 -n pause-212930: exit status 2 (354.999313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-212930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.84s)
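Note on this failure: the kubelet log above ends with systemd stopping the kubelet unit, while the follow-up status probe still reports the API server as "Running". A minimal manual cross-check, assuming the pause-212930 profile from this run is still present, repeats the same probes the harness uses plus a direct look at the kubelet unit (the ssh step is an added suggestion, not something the harness ran here):

    out/minikube-linux-arm64 status -p pause-212930 --format={{.APIServer}}
    out/minikube-linux-arm64 -p pause-212930 ssh -- sudo systemctl is-active kubelet
    kubectl --context pause-212930 get po -A --field-selector=status.phase!=Running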

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.181853ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:00:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
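The MK_ADDON_ENABLE_PAUSED exit comes from the paused-state probe visible in the stderr above: it runs `sudo runc list -f json` inside the node and fails because /run/runc does not exist on this crio-based node. A minimal manual reproduction, assuming the old-k8s-version-156305 container from this run is still running:

    docker exec old-k8s-version-156305 sudo runc list -f json
    docker exec old-k8s-version-156305 ls -la /run/runc

The second command only confirms the "open /run/runc: no such file or directory" error reported above.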
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-156305 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-156305 describe deploy/metrics-server -n kube-system: exit status 1 (107.046153ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-156305 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
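Once a metrics-server deployment does exist, the image override can be checked directly instead of reading `kubectl describe` output; a minimal sketch, assuming the standard Deployment layout:

    kubectl --context old-k8s-version-156305 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'

For this test the expected value is fake.domain/registry.k8s.io/echoserver:1.4.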
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-156305
helpers_test.go:244: (dbg) docker inspect old-k8s-version-156305:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426",
	        "Created": "2025-12-27T09:59:32.848675789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492891,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:59:32.928473143Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/hostname",
	        "HostsPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/hosts",
	        "LogPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426-json.log",
	        "Name": "/old-k8s-version-156305",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-156305:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-156305",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426",
	                "LowerDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-156305",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-156305/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-156305",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-156305",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-156305",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d7525122f610755ef39f25937436f1b0e42a5e00986979ff4e0b4f17aab5298",
	            "SandboxKey": "/var/run/docker/netns/2d7525122f61",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-156305": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:ca:1f:65:48:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05c9d9029c4a7ae450dccaf37503f6c9dee72aa6f5a06e1cc6293b09c389163d",
	                    "EndpointID": "fd32dc92b3b60937f82000944ccd5fc5e8f51be4c0a8af2e7f0bfa949fa3585f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-156305",
	                        "347dbce10daf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
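Most of the inspect dump above is boilerplate; the fields most relevant here (the forwarded SSH port and the node IP) can be pulled with the same Go-template style that appears later in the minikube logs. A minimal sketch, with the second template being an added suggestion rather than a command the harness ran:

    docker container inspect old-k8s-version-156305 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    docker container inspect old-k8s-version-156305 \
      --format '{{(index .NetworkSettings.Networks "old-k8s-version-156305").IPAddress}}'

Against this dump these resolve to 33421 and 192.168.85.2 respectively.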
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-156305 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-156305 logs -n 25: (1.19981415s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-246753 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo containerd config dump                                                                                                                                                                                                  │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo crio config                                                                                                                                                                                                             │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ delete  │ -p cilium-246753                                                                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ delete  │ -p cert-expiration-028595                                                                                                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:59:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:59:25.488878  492443 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:59:25.489018  492443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:59:25.489032  492443 out.go:374] Setting ErrFile to fd 2...
	I1227 09:59:25.489038  492443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:59:25.489315  492443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:59:25.489790  492443 out.go:368] Setting JSON to false
	I1227 09:59:25.490756  492443 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9715,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:59:25.490831  492443 start.go:143] virtualization:  
	I1227 09:59:25.494434  492443 out.go:179] * [old-k8s-version-156305] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:59:25.498787  492443 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:59:25.498942  492443 notify.go:221] Checking for updates...
	I1227 09:59:25.505252  492443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:59:25.508389  492443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:59:25.511343  492443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:59:25.514365  492443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:59:25.517399  492443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:59:25.520925  492443 config.go:182] Loaded profile config "force-systemd-flag-779725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:59:25.521041  492443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:59:25.554066  492443 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:59:25.554230  492443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:59:25.610874  492443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:59:25.601975027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:59:25.610980  492443 docker.go:319] overlay module found
	I1227 09:59:25.616075  492443 out.go:179] * Using the docker driver based on user configuration
	I1227 09:59:25.619066  492443 start.go:309] selected driver: docker
	I1227 09:59:25.619090  492443 start.go:928] validating driver "docker" against <nil>
	I1227 09:59:25.619106  492443 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:59:25.619842  492443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:59:25.684287  492443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:59:25.675263204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:59:25.684438  492443 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:59:25.684654  492443 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:59:25.687789  492443 out.go:179] * Using Docker driver with root privileges
	I1227 09:59:25.690689  492443 cni.go:84] Creating CNI manager for ""
	I1227 09:59:25.690755  492443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:59:25.690769  492443 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:59:25.690845  492443 start.go:353] cluster config:
	{Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:59:25.694046  492443 out.go:179] * Starting "old-k8s-version-156305" primary control-plane node in "old-k8s-version-156305" cluster
	I1227 09:59:25.696952  492443 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:59:25.699841  492443 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:59:25.702775  492443 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:59:25.702835  492443 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:59:25.702846  492443 cache.go:65] Caching tarball of preloaded images
	I1227 09:59:25.702854  492443 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:59:25.702933  492443 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:59:25.702943  492443 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 09:59:25.703062  492443 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json ...
	I1227 09:59:25.703079  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json: {Name:mka71a29b9543884c236ef3e66857cf62cb6e5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:25.722871  492443 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:59:25.722905  492443 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:59:25.722929  492443 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:59:25.722961  492443 start.go:360] acquireMachinesLock for old-k8s-version-156305: {Name:mk38a9d425ae861a3d9f927feaf86bb827ff0e6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:59:25.723085  492443 start.go:364] duration metric: took 102.36µs to acquireMachinesLock for "old-k8s-version-156305"
	I1227 09:59:25.723115  492443 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:59:25.723198  492443 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:59:25.726492  492443 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:59:25.726744  492443 start.go:159] libmachine.API.Create for "old-k8s-version-156305" (driver="docker")
	I1227 09:59:25.726783  492443 client.go:173] LocalClient.Create starting
	I1227 09:59:25.726873  492443 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 09:59:25.726913  492443 main.go:144] libmachine: Decoding PEM data...
	I1227 09:59:25.726938  492443 main.go:144] libmachine: Parsing certificate...
	I1227 09:59:25.726998  492443 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 09:59:25.727018  492443 main.go:144] libmachine: Decoding PEM data...
	I1227 09:59:25.727030  492443 main.go:144] libmachine: Parsing certificate...
	I1227 09:59:25.727383  492443 cli_runner.go:164] Run: docker network inspect old-k8s-version-156305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:59:25.746322  492443 cli_runner.go:211] docker network inspect old-k8s-version-156305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:59:25.746417  492443 network_create.go:284] running [docker network inspect old-k8s-version-156305] to gather additional debugging logs...
	I1227 09:59:25.746441  492443 cli_runner.go:164] Run: docker network inspect old-k8s-version-156305
	W1227 09:59:25.763115  492443 cli_runner.go:211] docker network inspect old-k8s-version-156305 returned with exit code 1
	I1227 09:59:25.763146  492443 network_create.go:287] error running [docker network inspect old-k8s-version-156305]: docker network inspect old-k8s-version-156305: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-156305 not found
	I1227 09:59:25.763166  492443 network_create.go:289] output of [docker network inspect old-k8s-version-156305]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-156305 not found
	
	** /stderr **
	I1227 09:59:25.763268  492443 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:59:25.779865  492443 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 09:59:25.780248  492443 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 09:59:25.780501  492443 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 09:59:25.780807  492443 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-489f01168e32 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:17:c3:81:51:6c} reservation:<nil>}
	I1227 09:59:25.781252  492443 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001aa89f0}
	I1227 09:59:25.781274  492443 network_create.go:124] attempt to create docker network old-k8s-version-156305 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 09:59:25.781334  492443 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-156305 old-k8s-version-156305
	I1227 09:59:25.839679  492443 network_create.go:108] docker network old-k8s-version-156305 192.168.85.0/24 created
	I1227 09:59:25.839711  492443 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-156305" container
	I1227 09:59:25.839804  492443 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:59:25.856701  492443 cli_runner.go:164] Run: docker volume create old-k8s-version-156305 --label name.minikube.sigs.k8s.io=old-k8s-version-156305 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:59:25.874747  492443 oci.go:103] Successfully created a docker volume old-k8s-version-156305
	I1227 09:59:25.874843  492443 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-156305-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-156305 --entrypoint /usr/bin/test -v old-k8s-version-156305:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:59:26.420441  492443 oci.go:107] Successfully prepared a docker volume old-k8s-version-156305
	I1227 09:59:26.420523  492443 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:59:26.420552  492443 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:59:26.420627  492443 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-156305:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:59:32.771626  492443 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-156305:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (6.350958495s)
	I1227 09:59:32.771662  492443 kic.go:203] duration metric: took 6.351107346s to extract preloaded images to volume ...
	W1227 09:59:32.771798  492443 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:59:32.771920  492443 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:59:32.833831  492443 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-156305 --name old-k8s-version-156305 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-156305 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-156305 --network old-k8s-version-156305 --ip 192.168.85.2 --volume old-k8s-version-156305:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:59:33.147481  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Running}}
	I1227 09:59:33.174648  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 09:59:33.208776  492443 cli_runner.go:164] Run: docker exec old-k8s-version-156305 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:59:33.261162  492443 oci.go:144] the created container "old-k8s-version-156305" has a running status.
	I1227 09:59:33.261196  492443 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa...
	I1227 09:59:33.503609  492443 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:59:33.536965  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 09:59:33.559701  492443 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:59:33.559721  492443 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-156305 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:59:33.614012  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 09:59:33.640485  492443 machine.go:94] provisionDockerMachine start ...
	I1227 09:59:33.640579  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:33.659849  492443 main.go:144] libmachine: Using SSH client type: native
	I1227 09:59:33.663211  492443 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1227 09:59:33.663241  492443 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:59:33.663828  492443 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45826->127.0.0.1:33421: read: connection reset by peer
	I1227 09:59:36.805517  492443 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-156305
	
	I1227 09:59:36.805568  492443 ubuntu.go:182] provisioning hostname "old-k8s-version-156305"
	I1227 09:59:36.805687  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:36.823147  492443 main.go:144] libmachine: Using SSH client type: native
	I1227 09:59:36.823460  492443 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1227 09:59:36.823474  492443 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-156305 && echo "old-k8s-version-156305" | sudo tee /etc/hostname
	I1227 09:59:36.972406  492443 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-156305
	
	I1227 09:59:36.972485  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:36.991237  492443 main.go:144] libmachine: Using SSH client type: native
	I1227 09:59:36.991565  492443 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1227 09:59:36.991590  492443 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-156305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-156305/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-156305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:59:37.130571  492443 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:59:37.130602  492443 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 09:59:37.130634  492443 ubuntu.go:190] setting up certificates
	I1227 09:59:37.130644  492443 provision.go:84] configureAuth start
	I1227 09:59:37.130714  492443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 09:59:37.151763  492443 provision.go:143] copyHostCerts
	I1227 09:59:37.151832  492443 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 09:59:37.151840  492443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 09:59:37.151917  492443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 09:59:37.152004  492443 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 09:59:37.152009  492443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 09:59:37.152033  492443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 09:59:37.152090  492443 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 09:59:37.152094  492443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 09:59:37.152121  492443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 09:59:37.152169  492443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-156305 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-156305]
	I1227 09:59:37.572577  492443 provision.go:177] copyRemoteCerts
	I1227 09:59:37.572671  492443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:59:37.572746  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:37.590442  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 09:59:37.690324  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:59:37.707944  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 09:59:37.726623  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:59:37.743926  492443 provision.go:87] duration metric: took 613.269117ms to configureAuth
	I1227 09:59:37.743954  492443 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:59:37.744141  492443 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 09:59:37.744242  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:37.761696  492443 main.go:144] libmachine: Using SSH client type: native
	I1227 09:59:37.762005  492443 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1227 09:59:37.762024  492443 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:59:38.081540  492443 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:59:38.081568  492443 machine.go:97] duration metric: took 4.441059s to provisionDockerMachine
	I1227 09:59:38.081589  492443 client.go:176] duration metric: took 12.354786915s to LocalClient.Create
	I1227 09:59:38.081637  492443 start.go:167] duration metric: took 12.354894888s to libmachine.API.Create "old-k8s-version-156305"
	I1227 09:59:38.081644  492443 start.go:293] postStartSetup for "old-k8s-version-156305" (driver="docker")
	I1227 09:59:38.081654  492443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:59:38.081732  492443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:59:38.081782  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:38.099247  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 09:59:38.198370  492443 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:59:38.201622  492443 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:59:38.201652  492443 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:59:38.201663  492443 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 09:59:38.201716  492443 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 09:59:38.201800  492443 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 09:59:38.201913  492443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:59:38.209405  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:59:38.226969  492443 start.go:296] duration metric: took 145.310246ms for postStartSetup
	I1227 09:59:38.227379  492443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 09:59:38.244231  492443 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json ...
	I1227 09:59:38.244524  492443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:59:38.244575  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:38.261538  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 09:59:38.359125  492443 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:59:38.363770  492443 start.go:128] duration metric: took 12.640557427s to createHost
	I1227 09:59:38.363798  492443 start.go:83] releasing machines lock for "old-k8s-version-156305", held for 12.64069982s
	I1227 09:59:38.363870  492443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 09:59:38.380269  492443 ssh_runner.go:195] Run: cat /version.json
	I1227 09:59:38.380327  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:38.380601  492443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:59:38.380669  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 09:59:38.398976  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 09:59:38.407176  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 09:59:38.494023  492443 ssh_runner.go:195] Run: systemctl --version
	I1227 09:59:38.590970  492443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:59:38.626607  492443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:59:38.630984  492443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:59:38.631064  492443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:59:38.665678  492443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
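The find/rename above parks the stock bridge and podman CNI definitions under a .mk_disabled suffix so the runtime does not pick them up before kindnet is installed. A sketch for confirming what is left active inside the node:

    docker exec old-k8s-version-156305 sh -c 'ls /etc/cni/net.d | grep -v mk_disabled'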
	I1227 09:59:38.665700  492443 start.go:496] detecting cgroup driver to use...
	I1227 09:59:38.665734  492443 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:59:38.665781  492443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:59:38.695282  492443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:59:38.712029  492443 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:59:38.712142  492443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:59:38.729918  492443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:59:38.749674  492443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:59:38.879352  492443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:59:39.011101  492443 docker.go:234] disabling docker service ...
	I1227 09:59:39.011240  492443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:59:39.033639  492443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:59:39.048045  492443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:59:39.178031  492443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:59:39.296804  492443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:59:39.309790  492443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:59:39.324328  492443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 09:59:39.324458  492443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:59:39.334474  492443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:59:39.334582  492443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:59:39.343671  492443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:59:39.352324  492443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:59:39.360911  492443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:59:39.368980  492443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:59:39.377950  492443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:59:39.392030  492443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
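The sed calls above converge on four CRI-O settings: the registry.k8s.io/pause:3.9 pause image, cgroupfs as cgroup manager, conmon placed in the pod cgroup, and unprivileged binding of low ports. A one-shot equivalent drop-in, as a sketch only (the [crio.*] section headers follow the stock crio.conf layout and are not taken from this log):

    # assumed-equivalent drop-in; run inside the node, then restart CRI-O
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio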
	I1227 09:59:39.403931  492443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:59:39.412113  492443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:59:39.419988  492443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:59:39.543125  492443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:59:39.737484  492443 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:59:39.737618  492443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:59:39.741587  492443 start.go:574] Will wait 60s for crictl version
	I1227 09:59:39.741662  492443 ssh_runner.go:195] Run: which crictl
	I1227 09:59:39.745116  492443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:59:39.769058  492443 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:59:39.769145  492443 ssh_runner.go:195] Run: crio --version
	I1227 09:59:39.796940  492443 ssh_runner.go:195] Run: crio --version
	I1227 09:59:39.829834  492443 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 09:59:39.832780  492443 cli_runner.go:164] Run: docker network inspect old-k8s-version-156305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:59:39.852259  492443 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:59:39.856138  492443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:59:39.866030  492443 kubeadm.go:884] updating cluster {Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:59:39.866228  492443 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:59:39.866314  492443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:59:39.898905  492443 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:59:39.898934  492443 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:59:39.898995  492443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:59:39.923212  492443 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:59:39.923237  492443 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:59:39.923245  492443 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1227 09:59:39.923341  492443 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-156305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
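The unit drop-in above empties the packaged ExecStart and points kubelet at the preloaded v1.28.0 binary with the crio socket and node IP baked in; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A sketch for checking that systemd actually merged it, run inside the node:

    systemctl cat kubelet                 # should show the 10-kubeadm.conf drop-in and its ExecStart
    sudo systemctl daemon-reload && systemctl status kubelet --no-pager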
	I1227 09:59:39.923422  492443 ssh_runner.go:195] Run: crio config
	I1227 09:59:39.981044  492443 cni.go:84] Creating CNI manager for ""
	I1227 09:59:39.981071  492443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:59:39.981113  492443 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:59:39.981144  492443 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-156305 NodeName:old-k8s-version-156305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:59:39.981298  492443 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-156305"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:59:39.981372  492443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 09:59:39.990198  492443 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:59:39.990301  492443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:59:39.998243  492443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 09:59:40.015553  492443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:59:40.032574  492443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
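The stacked InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration shown above are what land in /var/tmp/minikube/kubeadm.yaml.new. A hedged way to sanity-check such a file before init, assuming this kubeadm build ships the validate subcommand:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new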
	I1227 09:59:40.047583  492443 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:59:40.051937  492443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:59:40.063069  492443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:59:40.189327  492443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:59:40.210844  492443 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305 for IP: 192.168.85.2
	I1227 09:59:40.210869  492443 certs.go:195] generating shared ca certs ...
	I1227 09:59:40.210885  492443 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.211040  492443 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 09:59:40.211087  492443 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 09:59:40.211100  492443 certs.go:257] generating profile certs ...
	I1227 09:59:40.211160  492443 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.key
	I1227 09:59:40.211189  492443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt with IP's: []
	I1227 09:59:40.346248  492443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt ...
	I1227 09:59:40.346284  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: {Name:mk6e76788e71f4894bc9dd49a6397a60e246d67c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.346534  492443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.key ...
	I1227 09:59:40.346554  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.key: {Name:mkf1eaa6c5804539b3ad986b69ddaf27704fc8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.346713  492443 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key.aa518b85
	I1227 09:59:40.346735  492443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt.aa518b85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:59:40.663706  492443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt.aa518b85 ...
	I1227 09:59:40.663740  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt.aa518b85: {Name:mk8d2d35fdf01e5bf5a8d52feeb4b0e515e91911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.663926  492443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key.aa518b85 ...
	I1227 09:59:40.663941  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key.aa518b85: {Name:mk8656f7e7982afa32147ea2b6d568088185e5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.664076  492443 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt.aa518b85 -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt
	I1227 09:59:40.664158  492443 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key.aa518b85 -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key
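The apiserver profile certificate finalized just above carries at least the service VIP, loopback and node IP as IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A sketch for reading them back from the written file on the Jenkins host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt \
      | grep -A1 'Subject Alternative Name'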
	I1227 09:59:40.664221  492443 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key
	I1227 09:59:40.664240  492443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.crt with IP's: []
	I1227 09:59:40.762436  492443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.crt ...
	I1227 09:59:40.762465  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.crt: {Name:mkd9696ba22044006202751ba34313d3ffad5985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.762642  492443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key ...
	I1227 09:59:40.762657  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key: {Name:mkb7bd020245357641b6fb886196ca50cbb44fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:59:40.762838  492443 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 09:59:40.762889  492443 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 09:59:40.762904  492443 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:59:40.762934  492443 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:59:40.762963  492443 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:59:40.762992  492443 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 09:59:40.763042  492443 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 09:59:40.763667  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:59:40.782606  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 09:59:40.801270  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:59:40.818894  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:59:40.836548  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:59:40.853951  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:59:40.871708  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:59:40.896418  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:59:40.921449  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 09:59:40.946789  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:59:40.982062  492443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 09:59:41.006927  492443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:59:41.021765  492443 ssh_runner.go:195] Run: openssl version
	I1227 09:59:41.028886  492443 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 09:59:41.036789  492443 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 09:59:41.044576  492443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 09:59:41.048496  492443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 09:59:41.048610  492443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 09:59:41.090176  492443 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:59:41.098107  492443 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 09:59:41.105848  492443 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 09:59:41.113710  492443 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 09:59:41.121462  492443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 09:59:41.125556  492443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 09:59:41.125631  492443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 09:59:41.167644  492443 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:59:41.175561  492443 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:59:41.183042  492443 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:59:41.190837  492443 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:59:41.198366  492443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:59:41.202028  492443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:59:41.202093  492443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:59:41.243117  492443 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:59:41.250942  492443 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
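Each trusted PEM also gets an OpenSSL subject-hash symlink in /etc/ssl/certs, which is what the ln -fs calls above create; the hash comes straight from the certificate. A sketch, run inside the node, with the value observed in this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink created above, pointing at minikubeCA.pem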
	I1227 09:59:41.258676  492443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:59:41.262486  492443 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:59:41.262543  492443 kubeadm.go:401] StartCluster: {Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:59:41.262622  492443 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:59:41.262700  492443 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:59:41.289232  492443 cri.go:96] found id: ""
	I1227 09:59:41.289304  492443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:59:41.297330  492443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:59:41.305379  492443 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:59:41.305481  492443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:59:41.313535  492443 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:59:41.313558  492443 kubeadm.go:158] found existing configuration files:
	
	I1227 09:59:41.313617  492443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:59:41.321783  492443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:59:41.321852  492443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:59:41.329906  492443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:59:41.337930  492443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:59:41.338025  492443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:59:41.345657  492443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:59:41.353689  492443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:59:41.353816  492443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:59:41.361830  492443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:59:41.370221  492443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:59:41.370321  492443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:59:41.377837  492443 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:59:41.422265  492443 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1227 09:59:41.422326  492443 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:59:41.465539  492443 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:59:41.465619  492443 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:59:41.465667  492443 kubeadm.go:319] OS: Linux
	I1227 09:59:41.465717  492443 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:59:41.465769  492443 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:59:41.465820  492443 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:59:41.465883  492443 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:59:41.465936  492443 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:59:41.465988  492443 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:59:41.466038  492443 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:59:41.466091  492443 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:59:41.466190  492443 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:59:41.553278  492443 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:59:41.553423  492443 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:59:41.553536  492443 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1227 09:59:41.778571  492443 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:59:41.782762  492443 out.go:252]   - Generating certificates and keys ...
	I1227 09:59:41.782955  492443 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:59:41.783077  492443 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:59:43.487903  492443 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:59:44.130531  492443 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:59:44.923366  492443 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:59:45.517953  492443 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:59:46.350496  492443 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:59:46.350872  492443 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-156305] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:59:46.733204  492443 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:59:46.733590  492443 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-156305] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:59:47.049912  492443 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:59:47.887135  492443 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:59:48.481524  492443 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:59:48.481609  492443 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:59:49.198628  492443 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:59:49.627707  492443 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:59:50.121859  492443 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:59:50.833042  492443 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:59:50.833652  492443 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:59:50.836362  492443 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:59:50.839741  492443 out.go:252]   - Booting up control plane ...
	I1227 09:59:50.839869  492443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:59:50.839953  492443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:59:50.840880  492443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:59:50.862333  492443 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:59:50.862517  492443 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:59:50.862561  492443 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:59:50.986755  492443 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1227 09:59:58.487247  492443 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.503121 seconds
	I1227 09:59:58.487378  492443 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:59:58.504650  492443 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:59:59.032398  492443 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:59:59.032610  492443 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-156305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:59:59.547405  492443 kubeadm.go:319] [bootstrap-token] Using token: aq1uxr.8yfhgfg93gsxqyyv
	I1227 09:59:59.550257  492443 out.go:252]   - Configuring RBAC rules ...
	I1227 09:59:59.550387  492443 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:59:59.555768  492443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:59:59.568522  492443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:59:59.573156  492443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:59:59.577188  492443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:59:59.581516  492443 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:59:59.597841  492443 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:59:59.925525  492443 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:59:59.981676  492443 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:59:59.986449  492443 kubeadm.go:319] 
	I1227 09:59:59.986526  492443 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:59:59.986537  492443 kubeadm.go:319] 
	I1227 09:59:59.986610  492443 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:59:59.986618  492443 kubeadm.go:319] 
	I1227 09:59:59.986642  492443 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:59:59.986701  492443 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:59:59.986753  492443 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:59:59.986761  492443 kubeadm.go:319] 
	I1227 09:59:59.986811  492443 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:59:59.986819  492443 kubeadm.go:319] 
	I1227 09:59:59.986870  492443 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:59:59.986878  492443 kubeadm.go:319] 
	I1227 09:59:59.986927  492443 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:59:59.987001  492443 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:59:59.987069  492443 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:59:59.987075  492443 kubeadm.go:319] 
	I1227 09:59:59.987154  492443 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:59:59.987230  492443 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:59:59.987238  492443 kubeadm.go:319] 
	I1227 09:59:59.987317  492443 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token aq1uxr.8yfhgfg93gsxqyyv \
	I1227 09:59:59.987418  492443 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c \
	I1227 09:59:59.987441  492443 kubeadm.go:319] 	--control-plane 
	I1227 09:59:59.987450  492443 kubeadm.go:319] 
	I1227 09:59:59.987530  492443 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:59:59.987537  492443 kubeadm.go:319] 
	I1227 09:59:59.987614  492443 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token aq1uxr.8yfhgfg93gsxqyyv \
	I1227 09:59:59.987714  492443 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c 
	I1227 09:59:59.991633  492443 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:59:59.991754  492443 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:59:59.991828  492443 cni.go:84] Creating CNI manager for ""
	I1227 09:59:59.991855  492443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:59:59.997095  492443 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:00:00.000142  492443 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:00:00.006535  492443 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1227 10:00:00.006557  492443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:00:00.167799  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:00:02.147139  492443 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.979297194s)
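The applied manifest installs kindnet as the pod network. A hedged way to wait for it to come up (the DaemonSet name and namespace are assumptions, not read from this log):

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status ds/kindnet --timeout=120s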
	I1227 10:00:02.147184  492443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:00:02.147337  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:02.147426  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-156305 minikube.k8s.io/updated_at=2025_12_27T10_00_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=old-k8s-version-156305 minikube.k8s.io/primary=true
	I1227 10:00:02.445474  492443 ops.go:34] apiserver oom_adj: -16
	I1227 10:00:02.445635  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:02.946697  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:03.446478  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:03.946438  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:04.445879  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:04.945832  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:05.446509  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:05.945741  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:06.445876  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:06.946245  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:07.446376  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:07.945777  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:08.445939  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:08.945882  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:09.445760  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:09.945679  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:10.445780  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:10.496698  484533 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000942644s
	I1227 10:00:10.496730  484533 kubeadm.go:319] 
	I1227 10:00:10.496789  484533 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:00:10.496827  484533 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:00:10.496936  484533 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:00:10.496945  484533 kubeadm.go:319] 
	I1227 10:00:10.497048  484533 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:00:10.497084  484533 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:00:10.497119  484533 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:00:10.497127  484533 kubeadm.go:319] 
	I1227 10:00:10.512739  484533 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:00:10.513169  484533 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:00:10.513287  484533 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:00:10.513526  484533 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:00:10.513534  484533 kubeadm.go:319] 
	I1227 10:00:10.513603  484533 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:00:10.513743  484533 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-779725 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000942644s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
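The cgroups v1 warning repeated above refers to the kubelet configuration option FailCgroupV1. A minimal sketch of how that option would appear in a KubeletConfiguration file such as the /var/lib/kubelet/config.yaml written during this init; the exact YAML field casing is assumed here, not taken from this run:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# assumed field name; per the warning text, setting it to false keeps cgroup v1 support
	failCgroupV1: false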
	
	I1227 10:00:10.513833  484533 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 10:00:10.945663  484533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:00:10.962736  484533 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:00:10.962805  484533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:00:10.974233  484533 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:00:10.974259  484533 kubeadm.go:158] found existing configuration files:
	
	I1227 10:00:10.974312  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:00:10.984084  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:00:10.984155  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:00:10.992747  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:00:11.002221  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:00:11.002302  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:00:11.013560  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:00:11.023445  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:00:11.023515  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:00:11.033374  484533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:00:11.044183  484533 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:00:11.044265  484533 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:00:11.053082  484533 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:00:11.114718  484533 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:00:11.115046  484533 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:00:11.208881  484533 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:00:11.208952  484533 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:00:11.208993  484533 kubeadm.go:319] OS: Linux
	I1227 10:00:11.209043  484533 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:00:11.209096  484533 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:00:11.209147  484533 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:00:11.209199  484533 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:00:11.209249  484533 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:00:11.209305  484533 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:00:11.209356  484533 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:00:11.209409  484533 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:00:11.209459  484533 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:00:11.292695  484533 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:00:11.292850  484533 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:00:11.292982  484533 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:00:11.302691  484533 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:00:10.946409  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:11.446442  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:11.945737  492443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:00:12.075401  492443 kubeadm.go:1114] duration metric: took 9.928123066s to wait for elevateKubeSystemPrivileges
	I1227 10:00:12.075444  492443 kubeadm.go:403] duration metric: took 30.812904233s to StartCluster
	I1227 10:00:12.075464  492443 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:00:12.075542  492443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:00:12.076327  492443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:00:12.076601  492443 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:00:12.076715  492443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:00:12.077029  492443 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:00:12.077093  492443 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:00:12.077171  492443 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-156305"
	I1227 10:00:12.077188  492443 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-156305"
	I1227 10:00:12.077231  492443 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:00:12.077842  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:12.078513  492443 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-156305"
	I1227 10:00:12.078555  492443 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-156305"
	I1227 10:00:12.078883  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:12.083184  492443 out.go:179] * Verifying Kubernetes components...
	I1227 10:00:12.091084  492443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:00:12.131462  492443 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-156305"
	I1227 10:00:12.131503  492443 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:00:12.131951  492443 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:12.133076  492443 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:00:11.307682  484533 out.go:252]   - Generating certificates and keys ...
	I1227 10:00:11.307842  484533 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:00:11.307943  484533 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:00:11.308050  484533 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:00:11.308151  484533 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:00:11.308259  484533 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:00:11.308351  484533 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:00:11.308444  484533 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:00:11.308552  484533 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:00:11.308661  484533 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:00:11.308778  484533 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:00:11.308853  484533 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:00:11.308930  484533 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:00:11.911987  484533 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:00:12.188169  484533 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:00:12.376543  484533 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:00:12.810540  484533 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:00:12.909733  484533 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:00:12.914047  484533 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:00:12.914141  484533 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:00:12.917330  484533 out.go:252]   - Booting up control plane ...
	I1227 10:00:12.917452  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:00:12.917776  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:00:12.917872  484533 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:00:12.960022  484533 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:00:12.960353  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:00:12.969156  484533 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:00:12.969480  484533 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:00:12.970864  484533 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:00:12.136122  492443 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:00:12.136148  492443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:00:12.136229  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:12.170358  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:12.182333  492443 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:00:12.182356  492443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:00:12.182419  492443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:12.212176  492443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:12.728860  492443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:00:12.816830  492443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:00:12.946033  492443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:00:12.946302  492443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:00:14.196919  492443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.467970667s)
	I1227 10:00:14.197019  492443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.380121553s)
	I1227 10:00:14.197067  492443 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.25070882s)
	I1227 10:00:14.198190  492443 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-156305" to be "Ready" ...
	I1227 10:00:14.197096  492443 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.250985926s)
	I1227 10:00:14.198455  492443 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
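The "host record injected" message above corresponds to the sed pipeline a few lines earlier, which splices a hosts block into the CoreDNS Corefile ahead of the forward plugin. Reconstructed from that command, the injected stanza looks roughly like:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}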
	I1227 10:00:14.282237  492443 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:00:14.285074  492443 addons.go:530] duration metric: took 2.207973171s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:00:14.703535  492443 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-156305" context rescaled to 1 replicas
	I1227 10:00:13.228211  484533 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:00:13.228346  484533 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1227 10:00:16.203353  492443 node_ready.go:57] node "old-k8s-version-156305" has "Ready":"False" status (will retry)
	W1227 10:00:18.702251  492443 node_ready.go:57] node "old-k8s-version-156305" has "Ready":"False" status (will retry)
	W1227 10:00:21.202044  492443 node_ready.go:57] node "old-k8s-version-156305" has "Ready":"False" status (will retry)
	W1227 10:00:23.702071  492443 node_ready.go:57] node "old-k8s-version-156305" has "Ready":"False" status (will retry)
	W1227 10:00:26.202116  492443 node_ready.go:57] node "old-k8s-version-156305" has "Ready":"False" status (will retry)
	I1227 10:00:27.201538  492443 node_ready.go:49] node "old-k8s-version-156305" is "Ready"
	I1227 10:00:27.201566  492443 node_ready.go:38] duration metric: took 13.003346083s for node "old-k8s-version-156305" to be "Ready" ...
	I1227 10:00:27.201579  492443 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:00:27.201647  492443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:00:27.215055  492443 api_server.go:72] duration metric: took 15.138404989s to wait for apiserver process to appear ...
	I1227 10:00:27.215079  492443 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:00:27.215098  492443 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:00:27.227273  492443 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:00:27.228720  492443 api_server.go:141] control plane version: v1.28.0
	I1227 10:00:27.228743  492443 api_server.go:131] duration metric: took 13.657811ms to wait for apiserver health ...
	I1227 10:00:27.228752  492443 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:00:27.233403  492443 system_pods.go:59] 8 kube-system pods found
	I1227 10:00:27.233438  492443 system_pods.go:61] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:00:27.233445  492443 system_pods.go:61] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running
	I1227 10:00:27.233450  492443 system_pods.go:61] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:00:27.233455  492443 system_pods.go:61] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running
	I1227 10:00:27.233460  492443 system_pods.go:61] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running
	I1227 10:00:27.233466  492443 system_pods.go:61] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:00:27.233476  492443 system_pods.go:61] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running
	I1227 10:00:27.233482  492443 system_pods.go:61] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:00:27.233488  492443 system_pods.go:74] duration metric: took 4.730258ms to wait for pod list to return data ...
	I1227 10:00:27.233497  492443 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:00:27.237133  492443 default_sa.go:45] found service account: "default"
	I1227 10:00:27.237158  492443 default_sa.go:55] duration metric: took 3.654191ms for default service account to be created ...
	I1227 10:00:27.237167  492443 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:00:27.245370  492443 system_pods.go:86] 8 kube-system pods found
	I1227 10:00:27.245454  492443 system_pods.go:89] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:00:27.245477  492443 system_pods.go:89] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running
	I1227 10:00:27.245518  492443 system_pods.go:89] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:00:27.245546  492443 system_pods.go:89] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running
	I1227 10:00:27.245569  492443 system_pods.go:89] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running
	I1227 10:00:27.245593  492443 system_pods.go:89] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:00:27.245628  492443 system_pods.go:89] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running
	I1227 10:00:27.245656  492443 system_pods.go:89] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:00:27.245700  492443 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 10:00:27.526076  492443 system_pods.go:86] 8 kube-system pods found
	I1227 10:00:27.526124  492443 system_pods.go:89] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:00:27.526132  492443 system_pods.go:89] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running
	I1227 10:00:27.526140  492443 system_pods.go:89] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:00:27.526171  492443 system_pods.go:89] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running
	I1227 10:00:27.526178  492443 system_pods.go:89] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running
	I1227 10:00:27.526183  492443 system_pods.go:89] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:00:27.526193  492443 system_pods.go:89] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running
	I1227 10:00:27.526199  492443 system_pods.go:89] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:00:27.767864  492443 system_pods.go:86] 8 kube-system pods found
	I1227 10:00:27.767898  492443 system_pods.go:89] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Running
	I1227 10:00:27.767906  492443 system_pods.go:89] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running
	I1227 10:00:27.767920  492443 system_pods.go:89] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:00:27.767925  492443 system_pods.go:89] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running
	I1227 10:00:27.767932  492443 system_pods.go:89] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running
	I1227 10:00:27.767936  492443 system_pods.go:89] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:00:27.767941  492443 system_pods.go:89] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running
	I1227 10:00:27.767946  492443 system_pods.go:89] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Running
	I1227 10:00:27.767961  492443 system_pods.go:126] duration metric: took 530.788141ms to wait for k8s-apps to be running ...
	I1227 10:00:27.767978  492443 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:00:27.768042  492443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:00:27.782482  492443 system_svc.go:56] duration metric: took 14.494622ms WaitForService to wait for kubelet
	I1227 10:00:27.782522  492443 kubeadm.go:587] duration metric: took 15.70588679s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:00:27.782544  492443 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:00:27.786003  492443 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:00:27.786037  492443 node_conditions.go:123] node cpu capacity is 2
	I1227 10:00:27.786052  492443 node_conditions.go:105] duration metric: took 3.502132ms to run NodePressure ...
	I1227 10:00:27.786065  492443 start.go:242] waiting for startup goroutines ...
	I1227 10:00:27.786072  492443 start.go:247] waiting for cluster config update ...
	I1227 10:00:27.786083  492443 start.go:256] writing updated cluster config ...
	I1227 10:00:27.786436  492443 ssh_runner.go:195] Run: rm -f paused
	I1227 10:00:27.790469  492443 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:00:27.794956  492443 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5jmbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:27.800530  492443 pod_ready.go:94] pod "coredns-5dd5756b68-5jmbh" is "Ready"
	I1227 10:00:27.800561  492443 pod_ready.go:86] duration metric: took 5.570826ms for pod "coredns-5dd5756b68-5jmbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:27.804036  492443 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:27.809191  492443 pod_ready.go:94] pod "etcd-old-k8s-version-156305" is "Ready"
	I1227 10:00:27.809270  492443 pod_ready.go:86] duration metric: took 5.203749ms for pod "etcd-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:27.812609  492443 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:27.817976  492443 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-156305" is "Ready"
	I1227 10:00:27.818002  492443 pod_ready.go:86] duration metric: took 5.323201ms for pod "kube-apiserver-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:27.821306  492443 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:28.194430  492443 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-156305" is "Ready"
	I1227 10:00:28.194463  492443 pod_ready.go:86] duration metric: took 373.123641ms for pod "kube-controller-manager-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:28.395419  492443 pod_ready.go:83] waiting for pod "kube-proxy-pkr8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:28.795010  492443 pod_ready.go:94] pod "kube-proxy-pkr8q" is "Ready"
	I1227 10:00:28.795042  492443 pod_ready.go:86] duration metric: took 399.594298ms for pod "kube-proxy-pkr8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:28.995181  492443 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:29.394996  492443 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-156305" is "Ready"
	I1227 10:00:29.395023  492443 pod_ready.go:86] duration metric: took 399.812327ms for pod "kube-scheduler-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:00:29.395036  492443 pod_ready.go:40] duration metric: took 1.604532919s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:00:29.448698  492443 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 10:00:29.451945  492443 out.go:203] 
	W1227 10:00:29.454918  492443 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 10:00:29.457949  492443 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:00:29.461640  492443 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-156305" cluster and "default" namespace by default
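Given the kubectl version-skew warning above, one way to exercise the cluster with a matching kubectl is to follow the hint minikube prints; the profile name is taken from this run and the flag placement shown is one reasonable form, not verified against this exact invocation:

	out/minikube-linux-arm64 kubectl -p old-k8s-version-156305 -- get pods -A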
	
	
	==> CRI-O <==
	Dec 27 10:00:27 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:27.175280672Z" level=info msg="Created container 25a4dd58ef1d5570cdc63882e1f49e627ab9270a9008adac97db2adab408f35f: kube-system/storage-provisioner/storage-provisioner" id=06c4a363-9ee7-4961-8832-8e061a99e7ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:00:27 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:27.176140334Z" level=info msg="Starting container: 25a4dd58ef1d5570cdc63882e1f49e627ab9270a9008adac97db2adab408f35f" id=a161cbac-f68f-479b-b2e0-ebbfe75758d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:00:27 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:27.180486095Z" level=info msg="Started container" PID=1933 containerID=25a4dd58ef1d5570cdc63882e1f49e627ab9270a9008adac97db2adab408f35f description=kube-system/storage-provisioner/storage-provisioner id=a161cbac-f68f-479b-b2e0-ebbfe75758d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22b10f2484c49e9ddbb43fdee224a821165bb1e02996a87a1ebee71eafc0e42e
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.967673042Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d84e51cc-8a30-4eb3-b3f6-3f7e9896aed8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.967743229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.973752797Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7 UID:dfb2f88f-5b2b-4d4e-947d-54a4743f76e3 NetNS:/var/run/netns/a95c3141-f42e-45a4-acfe-fefeb86836fc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db40}] Aliases:map[]}"
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.973931023Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.984914479Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7 UID:dfb2f88f-5b2b-4d4e-947d-54a4743f76e3 NetNS:/var/run/netns/a95c3141-f42e-45a4-acfe-fefeb86836fc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db40}] Aliases:map[]}"
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.98507764Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.989235395Z" level=info msg="Ran pod sandbox ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7 with infra container: default/busybox/POD" id=d84e51cc-8a30-4eb3-b3f6-3f7e9896aed8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.990419131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=101420be-5105-4e18-9ec1-c8f2c453508b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.99067899Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=101420be-5105-4e18-9ec1-c8f2c453508b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.990727861Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=101420be-5105-4e18-9ec1-c8f2c453508b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.99147964Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b519949f-90c2-490c-b551-11fc68a33ace name=/runtime.v1.ImageService/PullImage
	Dec 27 10:00:29 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:29.994554452Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.090489428Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b519949f-90c2-490c-b551-11fc68a33ace name=/runtime.v1.ImageService/PullImage
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.091293738Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5b6d1a76-e21f-40cd-b291-ace5cd9dea54 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.09912275Z" level=info msg="Creating container: default/busybox/busybox" id=be8f65a3-2e53-4d41-89df-e476c9171096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.099261845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.104471117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.10495319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.123563129Z" level=info msg="Created container 8cdfe7e881a9ed9e7907d04cb47d4af21d4ca030cda3e9400e9d2f0e8649f93e: default/busybox/busybox" id=be8f65a3-2e53-4d41-89df-e476c9171096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.124500249Z" level=info msg="Starting container: 8cdfe7e881a9ed9e7907d04cb47d4af21d4ca030cda3e9400e9d2f0e8649f93e" id=780107b3-fe11-47ab-bc50-adf48ffd3bda name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:00:32 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:32.129148578Z" level=info msg="Started container" PID=1991 containerID=8cdfe7e881a9ed9e7907d04cb47d4af21d4ca030cda3e9400e9d2f0e8649f93e description=default/busybox/busybox id=780107b3-fe11-47ab-bc50-adf48ffd3bda name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7
	Dec 27 10:00:37 old-k8s-version-156305 crio[836]: time="2025-12-27T10:00:37.830867885Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8cdfe7e881a9e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   ee91743ac80f3       busybox                                          default
	25a4dd58ef1d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   22b10f2484c49       storage-provisioner                              kube-system
	775f108c57c92       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   1e6f3b3106c73       coredns-5dd5756b68-5jmbh                         kube-system
	f1d70fc1c602c       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   10f82c94a624a       kindnet-w2m9v                                    kube-system
	4b1308eba9a3f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      25 seconds ago      Running             kube-proxy                0                   d4dbecd1c7821       kube-proxy-pkr8q                                 kube-system
	ca1508a569c4f       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   38c4be2f8d3aa       kube-controller-manager-old-k8s-version-156305   kube-system
	d7bf42ae79814       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   1701dbe4585bb       kube-scheduler-old-k8s-version-156305            kube-system
	689f9a2418821       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   36e49b4f24c50       kube-apiserver-old-k8s-version-156305            kube-system
	1c3f3e6625876       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   a28589026b1e3       etcd-old-k8s-version-156305                      kube-system
	
	
	==> coredns [775f108c57c92376590de66a7cbd57bdc4db3fcaaadfbdcd9db5b3d84aa30db4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36101 - 7281 "HINFO IN 5918389040821848786.2920282752688233109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013535193s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-156305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-156305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=old-k8s-version-156305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_00_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-156305
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:00:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:00:31 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:00:31 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:00:31 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:00:31 +0000   Sat, 27 Dec 2025 10:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-156305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                d177d38b-fb11-4ae2-8414-a55831071099
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-5jmbh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-156305                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-w2m9v                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-156305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-156305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-pkr8q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-156305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-156305 event: Registered Node old-k8s-version-156305 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-156305 status is now: NodeReady
	
	
	==> dmesg <==
	[  +3.426470] overlayfs: idmapped layers are currently not supported
	[Dec27 09:27] overlayfs: idmapped layers are currently not supported
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1c3f3e66258760ad9655bdf855160e4f0d184abbd01c19b61d83bf5e186b694c] <==
	{"level":"info","ts":"2025-12-27T09:59:52.728987Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T09:59:52.729485Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-27T09:59:52.729657Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:59:52.729725Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:59:52.729771Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T09:59:52.730105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T09:59:52.7303Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-27T09:59:53.182189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:59:53.182238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:59:53.182256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T09:59:53.182269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:59:53.182276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T09:59:53.182286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:59:53.182293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T09:59:53.184007Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:59:53.185407Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-156305 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:59:53.185544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:59:53.191901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T09:59:53.192109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:59:53.194234Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:59:53.194274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:59:53.201544Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:59:53.201711Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:59:53.20187Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T09:59:53.202404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:00:39 up  2:43,  0 user,  load average: 1.82, 1.57, 2.00
	Linux old-k8s-version-156305 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1d70fc1c602ccb4570dc8e0de6fe4a059a905ef6be86b56af795cdee19c9fac] <==
	I1227 10:00:16.223289       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:00:16.318376       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:00:16.318545       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:00:16.318566       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:00:16.318578       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:00:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:00:16.520802       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:00:16.521142       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:00:16.521192       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:00:16.521364       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:00:16.721455       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:00:16.721551       1 metrics.go:72] Registering metrics
	I1227 10:00:16.721633       1 controller.go:711] "Syncing nftables rules"
	I1227 10:00:26.523954       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:00:26.524008       1 main.go:301] handling current node
	I1227 10:00:36.520595       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:00:36.520660       1 main.go:301] handling current node
	
	
	==> kube-apiserver [689f9a2418821f9a0aeec34ee5b61076eda018e4c029c44f41dbf1192f868b68] <==
	I1227 09:59:56.678055       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 09:59:56.677655       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 09:59:56.683219       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 09:59:56.683760       1 aggregator.go:166] initial CRD sync complete...
	I1227 09:59:56.683793       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 09:59:56.683820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 09:59:56.683850       1 cache.go:39] Caches are synced for autoregister controller
	E1227 09:59:56.713792       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1227 09:59:56.719343       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 09:59:56.917528       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 09:59:57.479941       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1227 09:59:57.486477       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1227 09:59:57.486495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 09:59:58.115323       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 09:59:58.168539       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 09:59:58.313850       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 09:59:58.323177       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 09:59:58.324540       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 09:59:58.330419       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 09:59:58.656232       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 09:59:59.908193       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 09:59:59.924320       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 09:59:59.939276       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 10:00:12.262086       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 10:00:12.526257       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ca1508a569c4fae2bb3758226dc0d24c3d315d9ba348f07678b4185e8723b07e] <==
	I1227 10:00:11.744127       1 range_allocator.go:380] "Set node PodCIDR" node="old-k8s-version-156305" podCIDRs=["10.244.0.0/24"]
	I1227 10:00:11.744414       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-156305" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1227 10:00:11.808300       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:00:12.194138       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:00:12.216509       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:00:12.216556       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 10:00:12.285383       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1227 10:00:12.736530       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pkr8q"
	I1227 10:00:12.738946       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j9kgh"
	I1227 10:00:12.739066       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w2m9v"
	I1227 10:00:12.900570       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5jmbh"
	I1227 10:00:13.010070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="725.344286ms"
	I1227 10:00:13.099207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.06485ms"
	I1227 10:00:13.099385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.468µs"
	I1227 10:00:14.323293       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1227 10:00:14.389036       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-j9kgh"
	I1227 10:00:14.400661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.31955ms"
	I1227 10:00:14.420911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.200597ms"
	I1227 10:00:14.443156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.184096ms"
	I1227 10:00:14.443413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.47µs"
	I1227 10:00:26.786218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.722µs"
	I1227 10:00:26.808617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.073µs"
	I1227 10:00:27.597332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.224198ms"
	I1227 10:00:27.597728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.149µs"
	I1227 10:00:31.660528       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [4b1308eba9a3f28afec1d77a2292d3a2c6d78381783e6eeab130b246c44455c4] <==
	I1227 10:00:13.685127       1 server_others.go:69] "Using iptables proxy"
	I1227 10:00:13.708343       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1227 10:00:13.747564       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:00:13.750974       1 server_others.go:152] "Using iptables Proxier"
	I1227 10:00:13.751072       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 10:00:13.751105       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 10:00:13.751153       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 10:00:13.751378       1 server.go:846] "Version info" version="v1.28.0"
	I1227 10:00:13.751586       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:00:13.752305       1 config.go:188] "Starting service config controller"
	I1227 10:00:13.752392       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 10:00:13.752436       1 config.go:97] "Starting endpoint slice config controller"
	I1227 10:00:13.752463       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 10:00:13.752948       1 config.go:315] "Starting node config controller"
	I1227 10:00:13.754804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 10:00:13.854140       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 10:00:13.854235       1 shared_informer.go:318] Caches are synced for service config
	I1227 10:00:13.855803       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d7bf42ae79814bd7c9b715e9fc009bc94f6be8dfc66cf8088ebd9997a38d5bae] <==
	W1227 09:59:56.865352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 09:59:56.866002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 09:59:56.865402       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1227 09:59:56.866082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1227 09:59:56.865447       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1227 09:59:56.866218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1227 09:59:56.865495       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1227 09:59:56.866296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1227 09:59:56.865529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 09:59:56.866373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 09:59:56.865572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 09:59:56.866444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1227 09:59:57.784838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1227 09:59:57.784959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1227 09:59:57.805062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1227 09:59:57.805181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1227 09:59:57.814532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1227 09:59:57.814565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1227 09:59:57.849539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 09:59:57.849702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 09:59:57.864860       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1227 09:59:57.864894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1227 09:59:57.892703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 09:59:57.892734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1227 09:59:58.348844       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 10:00:12 old-k8s-version-156305 kubelet[1397]: I1227 10:00:12.867814    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e2c235b-d7bb-427a-8c56-988f64794d9d-kube-proxy\") pod \"kube-proxy-pkr8q\" (UID: \"1e2c235b-d7bb-427a-8c56-988f64794d9d\") " pod="kube-system/kube-proxy-pkr8q"
	Dec 27 10:00:12 old-k8s-version-156305 kubelet[1397]: I1227 10:00:12.867882    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2c235b-d7bb-427a-8c56-988f64794d9d-lib-modules\") pod \"kube-proxy-pkr8q\" (UID: \"1e2c235b-d7bb-427a-8c56-988f64794d9d\") " pod="kube-system/kube-proxy-pkr8q"
	Dec 27 10:00:12 old-k8s-version-156305 kubelet[1397]: I1227 10:00:12.867913    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87rdv\" (UniqueName: \"kubernetes.io/projected/1e2c235b-d7bb-427a-8c56-988f64794d9d-kube-api-access-87rdv\") pod \"kube-proxy-pkr8q\" (UID: \"1e2c235b-d7bb-427a-8c56-988f64794d9d\") " pod="kube-system/kube-proxy-pkr8q"
	Dec 27 10:00:12 old-k8s-version-156305 kubelet[1397]: I1227 10:00:12.867940    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e2c235b-d7bb-427a-8c56-988f64794d9d-xtables-lock\") pod \"kube-proxy-pkr8q\" (UID: \"1e2c235b-d7bb-427a-8c56-988f64794d9d\") " pod="kube-system/kube-proxy-pkr8q"
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: I1227 10:00:13.042649    1397 topology_manager.go:215] "Topology Admit Handler" podUID="fba5eff1-7424-451f-9109-7e58587628ef" podNamespace="kube-system" podName="kindnet-w2m9v"
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: I1227 10:00:13.074400    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fba5eff1-7424-451f-9109-7e58587628ef-xtables-lock\") pod \"kindnet-w2m9v\" (UID: \"fba5eff1-7424-451f-9109-7e58587628ef\") " pod="kube-system/kindnet-w2m9v"
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: I1227 10:00:13.074457    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fba5eff1-7424-451f-9109-7e58587628ef-cni-cfg\") pod \"kindnet-w2m9v\" (UID: \"fba5eff1-7424-451f-9109-7e58587628ef\") " pod="kube-system/kindnet-w2m9v"
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: I1227 10:00:13.074484    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fba5eff1-7424-451f-9109-7e58587628ef-lib-modules\") pod \"kindnet-w2m9v\" (UID: \"fba5eff1-7424-451f-9109-7e58587628ef\") " pod="kube-system/kindnet-w2m9v"
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: I1227 10:00:13.074509    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfpsc\" (UniqueName: \"kubernetes.io/projected/fba5eff1-7424-451f-9109-7e58587628ef-kube-api-access-hfpsc\") pod \"kindnet-w2m9v\" (UID: \"fba5eff1-7424-451f-9109-7e58587628ef\") " pod="kube-system/kindnet-w2m9v"
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: W1227 10:00:13.357456    1397 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/crio-10f82c94a624ad45de42cd598b4f839af59b4213b80cc882f078c935989f00a2 WatchSource:0}: Error finding container 10f82c94a624ad45de42cd598b4f839af59b4213b80cc882f078c935989f00a2: Status 404 returned error can't find the container with id 10f82c94a624ad45de42cd598b4f839af59b4213b80cc882f078c935989f00a2
	Dec 27 10:00:13 old-k8s-version-156305 kubelet[1397]: I1227 10:00:13.537633    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pkr8q" podStartSLOduration=1.537579665 podCreationTimestamp="2025-12-27 10:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:00:13.537270049 +0000 UTC m=+13.668761171" watchObservedRunningTime="2025-12-27 10:00:13.537579665 +0000 UTC m=+13.669070755"
	Dec 27 10:00:16 old-k8s-version-156305 kubelet[1397]: I1227 10:00:16.555939    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-w2m9v" podStartSLOduration=1.781122562 podCreationTimestamp="2025-12-27 10:00:12 +0000 UTC" firstStartedPulling="2025-12-27 10:00:13.361922226 +0000 UTC m=+13.493413316" lastFinishedPulling="2025-12-27 10:00:16.136683947 +0000 UTC m=+16.268175037" observedRunningTime="2025-12-27 10:00:16.555301539 +0000 UTC m=+16.686792637" watchObservedRunningTime="2025-12-27 10:00:16.555884283 +0000 UTC m=+16.687375373"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.751736    1397 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.784752    1397 topology_manager.go:215] "Topology Admit Handler" podUID="1eb3c15a-a576-4711-849e-790fa87ddc70" podNamespace="kube-system" podName="coredns-5dd5756b68-5jmbh"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.793929    1397 topology_manager.go:215] "Topology Admit Handler" podUID="f6bd7b49-196a-44fd-87ef-c75c1aec15de" podNamespace="kube-system" podName="storage-provisioner"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.860947    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvs74\" (UniqueName: \"kubernetes.io/projected/f6bd7b49-196a-44fd-87ef-c75c1aec15de-kube-api-access-mvs74\") pod \"storage-provisioner\" (UID: \"f6bd7b49-196a-44fd-87ef-c75c1aec15de\") " pod="kube-system/storage-provisioner"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.861247    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb3c15a-a576-4711-849e-790fa87ddc70-config-volume\") pod \"coredns-5dd5756b68-5jmbh\" (UID: \"1eb3c15a-a576-4711-849e-790fa87ddc70\") " pod="kube-system/coredns-5dd5756b68-5jmbh"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.861311    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6bd7b49-196a-44fd-87ef-c75c1aec15de-tmp\") pod \"storage-provisioner\" (UID: \"f6bd7b49-196a-44fd-87ef-c75c1aec15de\") " pod="kube-system/storage-provisioner"
	Dec 27 10:00:26 old-k8s-version-156305 kubelet[1397]: I1227 10:00:26.861351    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxhz9\" (UniqueName: \"kubernetes.io/projected/1eb3c15a-a576-4711-849e-790fa87ddc70-kube-api-access-rxhz9\") pod \"coredns-5dd5756b68-5jmbh\" (UID: \"1eb3c15a-a576-4711-849e-790fa87ddc70\") " pod="kube-system/coredns-5dd5756b68-5jmbh"
	Dec 27 10:00:27 old-k8s-version-156305 kubelet[1397]: I1227 10:00:27.579555    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.57950825 podCreationTimestamp="2025-12-27 10:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:00:27.564843575 +0000 UTC m=+27.696334664" watchObservedRunningTime="2025-12-27 10:00:27.57950825 +0000 UTC m=+27.710999340"
	Dec 27 10:00:29 old-k8s-version-156305 kubelet[1397]: I1227 10:00:29.665802    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5jmbh" podStartSLOduration=17.665762712 podCreationTimestamp="2025-12-27 10:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:00:27.580149169 +0000 UTC m=+27.711640259" watchObservedRunningTime="2025-12-27 10:00:29.665762712 +0000 UTC m=+29.797253802"
	Dec 27 10:00:29 old-k8s-version-156305 kubelet[1397]: I1227 10:00:29.665960    1397 topology_manager.go:215] "Topology Admit Handler" podUID="dfb2f88f-5b2b-4d4e-947d-54a4743f76e3" podNamespace="default" podName="busybox"
	Dec 27 10:00:29 old-k8s-version-156305 kubelet[1397]: I1227 10:00:29.780977    1397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgk92\" (UniqueName: \"kubernetes.io/projected/dfb2f88f-5b2b-4d4e-947d-54a4743f76e3-kube-api-access-tgk92\") pod \"busybox\" (UID: \"dfb2f88f-5b2b-4d4e-947d-54a4743f76e3\") " pod="default/busybox"
	Dec 27 10:00:29 old-k8s-version-156305 kubelet[1397]: W1227 10:00:29.988888    1397 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/crio-ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7 WatchSource:0}: Error finding container ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7: Status 404 returned error can't find the container with id ee91743ac80f3ee43ec5f49702d6906f9b05f062412df14a71eb9062eea332f7
	Dec 27 10:00:32 old-k8s-version-156305 kubelet[1397]: I1227 10:00:32.579414    1397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.479517003 podCreationTimestamp="2025-12-27 10:00:29 +0000 UTC" firstStartedPulling="2025-12-27 10:00:29.990904142 +0000 UTC m=+30.122395240" lastFinishedPulling="2025-12-27 10:00:32.090758978 +0000 UTC m=+32.222250068" observedRunningTime="2025-12-27 10:00:32.579312434 +0000 UTC m=+32.710803532" watchObservedRunningTime="2025-12-27 10:00:32.579371831 +0000 UTC m=+32.710862921"
	
	
	==> storage-provisioner [25a4dd58ef1d5570cdc63882e1f49e627ab9270a9008adac97db2adab408f35f] <==
	I1227 10:00:27.193018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:00:27.225380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:00:27.225514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 10:00:27.246196       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:00:27.248255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-156305_2dd50a33-d89b-42c5-926a-73a1d450ef5f!
	I1227 10:00:27.246733       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00426bfd-0cf8-4159-8bc4-8e458dec9071", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-156305_2dd50a33-d89b-42c5-926a-73a1d450ef5f became leader
	I1227 10:00:27.349939       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-156305_2dd50a33-d89b-42c5-926a-73a1d450ef5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156305 -n old-k8s-version-156305
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-156305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.52s)
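The node description, dmesg excerpt and per-component logs above are the standard post-mortem dump for this profile. A minimal sketch of how the same data could be re-collected by hand, assuming the old-k8s-version-156305 profile and kube context are still available (commands only, output omitted):

	# node conditions, capacity, allocated resources and events, as in the node section above
	kubectl --context old-k8s-version-156305 describe node old-k8s-version-156305
	# dmesg plus the etcd / kindnet / kube-* / kubelet / storage-provisioner sections above
	out/minikube-linux-arm64 -p old-k8s-version-156305 logs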

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-156305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-156305 --alsologtostderr -v=1: exit status 80 (1.928452165s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-156305 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:01:57.769361  499359 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:01:57.769464  499359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:01:57.769469  499359 out.go:374] Setting ErrFile to fd 2...
	I1227 10:01:57.769475  499359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:01:57.769720  499359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:01:57.769989  499359 out.go:368] Setting JSON to false
	I1227 10:01:57.770012  499359 mustload.go:66] Loading cluster: old-k8s-version-156305
	I1227 10:01:57.770514  499359 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:01:57.771063  499359 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:57.788820  499359 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:57.789243  499359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:01:57.855059  499359 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 10:01:57.845529796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:01:57.855727  499359 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-156305 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s
(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:01:57.861063  499359 out.go:179] * Pausing node old-k8s-version-156305 ... 
	I1227 10:01:57.864142  499359 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:57.864505  499359 ssh_runner.go:195] Run: systemctl --version
	I1227 10:01:57.864555  499359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:57.882458  499359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:57.985212  499359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:58.007695  499359 pause.go:52] kubelet running: true
	I1227 10:01:58.007826  499359 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:01:58.245817  499359 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:01:58.245936  499359 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:01:58.321443  499359 cri.go:96] found id: "e30c642381fd66b6ac4e8aacfb12d272b20d86702b36d52f8134df8d684e8b32"
	I1227 10:01:58.321472  499359 cri.go:96] found id: "8501cac9151481649f57c3b1c4cff002c410d0569db5665e9748a613a6f2b616"
	I1227 10:01:58.321478  499359 cri.go:96] found id: "504b4912148700373d37491a8b4ed435ec42d5677bc4483cf42ab677f49c02f2"
	I1227 10:01:58.321482  499359 cri.go:96] found id: "ce4a589925fa2bea6e1c4dd2a3f450ac19fa2b6905610d4fdca193b304e7c654"
	I1227 10:01:58.321485  499359 cri.go:96] found id: "d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac"
	I1227 10:01:58.321497  499359 cri.go:96] found id: "cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051"
	I1227 10:01:58.321505  499359 cri.go:96] found id: "5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063"
	I1227 10:01:58.321509  499359 cri.go:96] found id: "d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c"
	I1227 10:01:58.321512  499359 cri.go:96] found id: "52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2"
	I1227 10:01:58.321518  499359 cri.go:96] found id: "0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	I1227 10:01:58.321524  499359 cri.go:96] found id: "01fe9c54f51a28045014799ed2d0326a433e8b4d38927e40419708a1a7a0a3c7"
	I1227 10:01:58.321528  499359 cri.go:96] found id: ""
	I1227 10:01:58.321577  499359 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:01:58.334296  499359 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:01:58Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:01:58.590772  499359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:58.603905  499359 pause.go:52] kubelet running: false
	I1227 10:01:58.603969  499359 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:01:58.782438  499359 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:01:58.782590  499359 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:01:58.850796  499359 cri.go:96] found id: "e30c642381fd66b6ac4e8aacfb12d272b20d86702b36d52f8134df8d684e8b32"
	I1227 10:01:58.850819  499359 cri.go:96] found id: "8501cac9151481649f57c3b1c4cff002c410d0569db5665e9748a613a6f2b616"
	I1227 10:01:58.850825  499359 cri.go:96] found id: "504b4912148700373d37491a8b4ed435ec42d5677bc4483cf42ab677f49c02f2"
	I1227 10:01:58.850828  499359 cri.go:96] found id: "ce4a589925fa2bea6e1c4dd2a3f450ac19fa2b6905610d4fdca193b304e7c654"
	I1227 10:01:58.850832  499359 cri.go:96] found id: "d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac"
	I1227 10:01:58.850835  499359 cri.go:96] found id: "cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051"
	I1227 10:01:58.850838  499359 cri.go:96] found id: "5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063"
	I1227 10:01:58.850841  499359 cri.go:96] found id: "d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c"
	I1227 10:01:58.850844  499359 cri.go:96] found id: "52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2"
	I1227 10:01:58.850850  499359 cri.go:96] found id: "0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	I1227 10:01:58.850853  499359 cri.go:96] found id: "01fe9c54f51a28045014799ed2d0326a433e8b4d38927e40419708a1a7a0a3c7"
	I1227 10:01:58.850889  499359 cri.go:96] found id: ""
	I1227 10:01:58.850960  499359 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:01:59.358313  499359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:59.372341  499359 pause.go:52] kubelet running: false
	I1227 10:01:59.372404  499359 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:01:59.546750  499359 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:01:59.546855  499359 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:01:59.613061  499359 cri.go:96] found id: "e30c642381fd66b6ac4e8aacfb12d272b20d86702b36d52f8134df8d684e8b32"
	I1227 10:01:59.613135  499359 cri.go:96] found id: "8501cac9151481649f57c3b1c4cff002c410d0569db5665e9748a613a6f2b616"
	I1227 10:01:59.613157  499359 cri.go:96] found id: "504b4912148700373d37491a8b4ed435ec42d5677bc4483cf42ab677f49c02f2"
	I1227 10:01:59.613181  499359 cri.go:96] found id: "ce4a589925fa2bea6e1c4dd2a3f450ac19fa2b6905610d4fdca193b304e7c654"
	I1227 10:01:59.613214  499359 cri.go:96] found id: "d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac"
	I1227 10:01:59.613242  499359 cri.go:96] found id: "cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051"
	I1227 10:01:59.613262  499359 cri.go:96] found id: "5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063"
	I1227 10:01:59.613284  499359 cri.go:96] found id: "d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c"
	I1227 10:01:59.613304  499359 cri.go:96] found id: "52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2"
	I1227 10:01:59.613342  499359 cri.go:96] found id: "0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	I1227 10:01:59.613361  499359 cri.go:96] found id: "01fe9c54f51a28045014799ed2d0326a433e8b4d38927e40419708a1a7a0a3c7"
	I1227 10:01:59.613382  499359 cri.go:96] found id: ""
	I1227 10:01:59.613460  499359 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:01:59.629873  499359 out.go:203] 
	W1227 10:01:59.633255  499359 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:01:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:01:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:01:59.633280  499359 out.go:285] * 
	* 
	W1227 10:01:59.637244  499359 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:01:59.639426  499359 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-156305 --alsologtostderr -v=1 failed: exit status 80
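The pause fails in minikube's container-listing step: crictl does return the kube-system container IDs (the "found id" lines above), but the follow-up "sudo runc list -f json" exits with status 1 because /run/runc does not exist on the node, and the retries hit the same error until the GUEST_PAUSE exit. A minimal sketch of how that state could be inspected by hand, assuming the profile is still running; the ls and crictl checks are illustrative additions, only the runc command is taken verbatim from the log:

	# does the runc state directory the pause path expects actually exist?
	out/minikube-linux-arm64 ssh -p old-k8s-version-156305 -- "sudo ls /run/runc"
	# the exact command the pause code runs over SSH (see the stderr log above)
	out/minikube-linux-arm64 ssh -p old-k8s-version-156305 -- "sudo runc list -f json"
	# what CRI-O itself reports, for comparison
	out/minikube-linux-arm64 ssh -p old-k8s-version-156305 -- "sudo crictl ps -a"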
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-156305
helpers_test.go:244: (dbg) docker inspect old-k8s-version-156305:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426",
	        "Created": "2025-12-27T09:59:32.848675789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:00:52.964083972Z",
	            "FinishedAt": "2025-12-27T10:00:52.127246309Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/hostname",
	        "HostsPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/hosts",
	        "LogPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426-json.log",
	        "Name": "/old-k8s-version-156305",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-156305:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-156305",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426",
	                "LowerDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-156305",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-156305/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-156305",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-156305",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-156305",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "421c3dbefc237ae8e1c5175dee07269f48d08ea2c2470e3596ad0e38a9b224c6",
	            "SandboxKey": "/var/run/docker/netns/421c3dbefc23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-156305": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:9d:db:20:54:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05c9d9029c4a7ae450dccaf37503f6c9dee72aa6f5a06e1cc6293b09c389163d",
	                    "EndpointID": "50a4c166556d059e1328116ce49f8da2faa8895b4f3fe841ccff4412dc0c04d2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-156305",
	                        "347dbce10daf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305: exit status 2 (638.574565ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-156305 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-156305 logs -n 25: (1.352743902s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-246753 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo containerd config dump                                                                                                                                                                                                  │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo crio config                                                                                                                                                                                                             │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ delete  │ -p cilium-246753                                                                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ delete  │ -p cert-expiration-028595                                                                                                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:00:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:00:52.667703  496650 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:00:52.667884  496650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:00:52.667914  496650 out.go:374] Setting ErrFile to fd 2...
	I1227 10:00:52.667939  496650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:00:52.668326  496650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:00:52.668818  496650 out.go:368] Setting JSON to false
	I1227 10:00:52.669741  496650 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9802,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:00:52.669864  496650 start.go:143] virtualization:  
	I1227 10:00:52.672857  496650 out.go:179] * [old-k8s-version-156305] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:00:52.675423  496650 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:00:52.675622  496650 notify.go:221] Checking for updates...
	I1227 10:00:52.681459  496650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:00:52.684360  496650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:00:52.687151  496650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:00:52.689971  496650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:00:52.693006  496650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:00:52.696440  496650 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:00:52.699983  496650 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 10:00:52.702850  496650 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:00:52.733486  496650 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:00:52.733603  496650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:00:52.794400  496650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:00:52.784184865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:00:52.794517  496650 docker.go:319] overlay module found
	I1227 10:00:52.797609  496650 out.go:179] * Using the docker driver based on existing profile
	I1227 10:00:52.800406  496650 start.go:309] selected driver: docker
	I1227 10:00:52.800429  496650 start.go:928] validating driver "docker" against &{Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:00:52.800549  496650 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:00:52.801298  496650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:00:52.872918  496650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:00:52.863207291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:00:52.873260  496650 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:00:52.873300  496650 cni.go:84] Creating CNI manager for ""
	I1227 10:00:52.873361  496650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:00:52.873409  496650 start.go:353] cluster config:
	{Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:00:52.876616  496650 out.go:179] * Starting "old-k8s-version-156305" primary control-plane node in "old-k8s-version-156305" cluster
	I1227 10:00:52.879490  496650 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:00:52.882438  496650 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:00:52.885298  496650 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:00:52.885344  496650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:00:52.885370  496650 cache.go:65] Caching tarball of preloaded images
	I1227 10:00:52.885376  496650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:00:52.885453  496650 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:00:52.885464  496650 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 10:00:52.885583  496650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json ...
	I1227 10:00:52.909604  496650 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:00:52.909628  496650 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:00:52.909643  496650 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:00:52.909676  496650 start.go:360] acquireMachinesLock for old-k8s-version-156305: {Name:mk38a9d425ae861a3d9f927feaf86bb827ff0e6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:00:52.909744  496650 start.go:364] duration metric: took 51.094µs to acquireMachinesLock for "old-k8s-version-156305"
	I1227 10:00:52.909768  496650 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:00:52.909773  496650 fix.go:54] fixHost starting: 
	I1227 10:00:52.910038  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:52.927811  496650 fix.go:112] recreateIfNeeded on old-k8s-version-156305: state=Stopped err=<nil>
	W1227 10:00:52.927845  496650 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:00:52.931174  496650 out.go:252] * Restarting existing docker container for "old-k8s-version-156305" ...
	I1227 10:00:52.931275  496650 cli_runner.go:164] Run: docker start old-k8s-version-156305
	I1227 10:00:53.185820  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:53.207973  496650 kic.go:430] container "old-k8s-version-156305" state is running.
	I1227 10:00:53.208352  496650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 10:00:53.235558  496650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json ...
	I1227 10:00:53.235787  496650 machine.go:94] provisionDockerMachine start ...
	I1227 10:00:53.240076  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:53.268004  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:53.268332  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:53.268341  496650 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:00:53.268935  496650 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41710->127.0.0.1:33426: read: connection reset by peer
	I1227 10:00:56.405931  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-156305
	
	I1227 10:00:56.405957  496650 ubuntu.go:182] provisioning hostname "old-k8s-version-156305"
	I1227 10:00:56.406080  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:56.429696  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:56.430019  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:56.430036  496650 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-156305 && echo "old-k8s-version-156305" | sudo tee /etc/hostname
	I1227 10:00:56.579925  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-156305
	
	I1227 10:00:56.580048  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:56.598035  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:56.598610  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:56.598637  496650 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-156305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-156305/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-156305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:00:56.738482  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:00:56.738510  496650 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:00:56.738547  496650 ubuntu.go:190] setting up certificates
	I1227 10:00:56.738558  496650 provision.go:84] configureAuth start
	I1227 10:00:56.738623  496650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 10:00:56.756960  496650 provision.go:143] copyHostCerts
	I1227 10:00:56.757035  496650 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:00:56.757062  496650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:00:56.757140  496650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:00:56.757239  496650 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:00:56.757248  496650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:00:56.757274  496650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:00:56.757332  496650 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:00:56.757340  496650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:00:56.757362  496650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:00:56.757414  496650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-156305 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-156305]
	I1227 10:00:57.161342  496650 provision.go:177] copyRemoteCerts
	I1227 10:00:57.161417  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:00:57.161457  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.179202  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:57.278527  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:00:57.296666  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 10:00:57.313898  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:00:57.331800  496650 provision.go:87] duration metric: took 593.215618ms to configureAuth
	I1227 10:00:57.331876  496650 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:00:57.332098  496650 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:00:57.332233  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.350003  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:57.350359  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:57.350382  496650 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:00:57.723813  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:00:57.723920  496650 machine.go:97] duration metric: took 4.488122609s to provisionDockerMachine
	I1227 10:00:57.723958  496650 start.go:293] postStartSetup for "old-k8s-version-156305" (driver="docker")
	I1227 10:00:57.723984  496650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:00:57.724064  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:00:57.724140  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.748057  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:57.846299  496650 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:00:57.849754  496650 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:00:57.849786  496650 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:00:57.849806  496650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:00:57.849901  496650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:00:57.850013  496650 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:00:57.850122  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:00:57.858054  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:00:57.876801  496650 start.go:296] duration metric: took 152.813231ms for postStartSetup
	I1227 10:00:57.876899  496650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:00:57.876944  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.894526  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:57.991776  496650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:00:57.996992  496650 fix.go:56] duration metric: took 5.087210827s for fixHost
	I1227 10:00:57.997030  496650 start.go:83] releasing machines lock for "old-k8s-version-156305", held for 5.087276542s
	I1227 10:00:57.997103  496650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 10:00:58.020995  496650 ssh_runner.go:195] Run: cat /version.json
	I1227 10:00:58.021081  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:58.021135  496650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:00:58.021226  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:58.049026  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:58.051188  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:58.241329  496650 ssh_runner.go:195] Run: systemctl --version
	I1227 10:00:58.248300  496650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:00:58.289099  496650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:00:58.293640  496650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:00:58.293789  496650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:00:58.301893  496650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:00:58.301917  496650 start.go:496] detecting cgroup driver to use...
	I1227 10:00:58.301968  496650 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:00:58.302032  496650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:00:58.317292  496650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:00:58.330524  496650 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:00:58.330589  496650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:00:58.346051  496650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:00:58.359614  496650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:00:58.495391  496650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:00:58.609291  496650 docker.go:234] disabling docker service ...
	I1227 10:00:58.609355  496650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:00:58.624110  496650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:00:58.637281  496650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:00:58.755038  496650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:00:58.868258  496650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:00:58.881819  496650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:00:58.896370  496650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 10:00:58.896459  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.906083  496650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:00:58.906200  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.915503  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.924125  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.932627  496650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:00:58.940866  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.949740  496650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.958465  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.967359  496650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:00:58.975259  496650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:00:58.982838  496650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:00:59.094879  496650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:00:59.297060  496650 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:00:59.297177  496650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:00:59.300978  496650 start.go:574] Will wait 60s for crictl version
	I1227 10:00:59.301072  496650 ssh_runner.go:195] Run: which crictl
	I1227 10:00:59.304618  496650 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:00:59.329538  496650 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:00:59.329651  496650 ssh_runner.go:195] Run: crio --version
	I1227 10:00:59.357954  496650 ssh_runner.go:195] Run: crio --version
	I1227 10:00:59.391657  496650 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 10:00:59.394493  496650 cli_runner.go:164] Run: docker network inspect old-k8s-version-156305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:00:59.411324  496650 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:00:59.415321  496650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:00:59.425333  496650 kubeadm.go:884] updating cluster {Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:00:59.425463  496650 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:00:59.425523  496650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:00:59.460418  496650 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:00:59.460442  496650 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:00:59.460499  496650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:00:59.485456  496650 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:00:59.485481  496650 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:00:59.485502  496650 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1227 10:00:59.485607  496650 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-156305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
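The unit text above is written a few lines below as the kubelet drop-in 10-kubeadm.conf; a small sketch of inspecting the merged unit on the node with standard systemd tooling:

	systemctl cat kubelet                 # kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the overridden ExecStart with --node-ip=192.168.85.2
	sudo journalctl -u kubelet --no-pager --since "5 min ago" | tail -n 20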
	I1227 10:00:59.485691  496650 ssh_runner.go:195] Run: crio config
	I1227 10:00:59.568379  496650 cni.go:84] Creating CNI manager for ""
	I1227 10:00:59.568418  496650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:00:59.568441  496650 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:00:59.568465  496650 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-156305 NodeName:old-k8s-version-156305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:00:59.568623  496650 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-156305"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
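The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new below and later diffed against the existing kubeadm.yaml during the restart. One way to confirm the KubeletConfiguration portion actually took effect is the kubelet's configz endpoint via the apiserver proxy (a sketch; the node name is the one from this run):

	kubectl get --raw "/api/v1/nodes/old-k8s-version-156305/proxy/configz"
	# the returned JSON should show cgroupDriver "cgroupfs", failSwapOn false,
	# and the 0% evictionHard thresholds set above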
	
	I1227 10:00:59.568698  496650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 10:00:59.576645  496650 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:00:59.576724  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:00:59.584575  496650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 10:00:59.598059  496650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:00:59.611067  496650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1227 10:00:59.625050  496650 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:00:59.629176  496650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:00:59.639418  496650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:00:59.763860  496650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:00:59.780235  496650 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305 for IP: 192.168.85.2
	I1227 10:00:59.780270  496650 certs.go:195] generating shared ca certs ...
	I1227 10:00:59.780286  496650 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:00:59.780451  496650 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:00:59.780497  496650 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:00:59.780534  496650 certs.go:257] generating profile certs ...
	I1227 10:00:59.780623  496650 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.key
	I1227 10:00:59.780701  496650 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key.aa518b85
	I1227 10:00:59.780750  496650 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key
	I1227 10:00:59.780866  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:00:59.780904  496650 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:00:59.780918  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:00:59.780952  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:00:59.780982  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:00:59.781010  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:00:59.781059  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:00:59.781642  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:00:59.803450  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:00:59.821319  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:00:59.841011  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:00:59.873860  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 10:00:59.907441  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:00:59.975391  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:00:59.996371  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:01:00.029229  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:01:00.108718  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:01:00.179569  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:01:00.222865  496650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:01:00.290526  496650 ssh_runner.go:195] Run: openssl version
	I1227 10:01:00.300223  496650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.343177  496650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:01:00.355317  496650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.361847  496650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.361964  496650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.410440  496650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:01:00.440074  496650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.450781  496650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:01:00.461538  496650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.468301  496650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.468376  496650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.516104  496650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:01:00.526860  496650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.539641  496650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:01:00.549509  496650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.554390  496650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.554461  496650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.607399  496650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
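Each CA copied into /usr/share/ca-certificates is linked under /etc/ssl/certs by its OpenSSL subject hash, which is what the openssl x509 -hash and test -L pairs above establish. A minimal sketch of that relationship for the minikube CA, using the paths from this run:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/$h.0"
	# ... /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem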
	I1227 10:01:00.628951  496650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:01:00.634722  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:01:00.729554  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:01:00.817106  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:01:00.882882  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:01:00.941530  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:01:01.003358  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
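openssl's -checkend 86400 exits 0 only if the certificate is still valid 24 hours from now, so each command above asserts that the corresponding control-plane cert is not about to expire. A compact sketch of the same check over all of them (hypothetical loop, same paths as in the log):

	for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: ok" || echo "$c: expires within 24h"
	done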
	I1227 10:01:01.054882  496650 kubeadm.go:401] StartCluster: {Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:01:01.055040  496650 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:01:01.055148  496650 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:01:01.123137  496650 cri.go:96] found id: "cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051"
	I1227 10:01:01.123220  496650 cri.go:96] found id: "5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063"
	I1227 10:01:01.123239  496650 cri.go:96] found id: "d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c"
	I1227 10:01:01.123264  496650 cri.go:96] found id: "52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2"
	I1227 10:01:01.123315  496650 cri.go:96] found id: ""
	I1227 10:01:01.123404  496650 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:01:01.143644  496650 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:01:01Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:01:01.143808  496650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:01:01.156749  496650 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:01:01.156829  496650 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:01:01.156939  496650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:01:01.172109  496650 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:01:01.172645  496650 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-156305" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:01:01.172813  496650 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-156305" cluster setting kubeconfig missing "old-k8s-version-156305" context setting]
	I1227 10:01:01.173289  496650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:01:01.175085  496650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:01:01.188666  496650 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:01:01.188752  496650 kubeadm.go:602] duration metric: took 31.900845ms to restartPrimaryControlPlane
	I1227 10:01:01.188783  496650 kubeadm.go:403] duration metric: took 133.906ms to StartCluster
	I1227 10:01:01.188833  496650 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:01:01.188936  496650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:01:01.189672  496650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:01:01.189986  496650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:01:01.190366  496650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:01:01.190461  496650 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-156305"
	I1227 10:01:01.190489  496650 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-156305"
	W1227 10:01:01.190498  496650 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:01:01.190524  496650 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:01.190633  496650 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:01:01.190813  496650 addons.go:70] Setting dashboard=true in profile "old-k8s-version-156305"
	I1227 10:01:01.190897  496650 addons.go:239] Setting addon dashboard=true in "old-k8s-version-156305"
	W1227 10:01:01.190921  496650 addons.go:248] addon dashboard should already be in state true
	I1227 10:01:01.190979  496650 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:01.191037  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.191608  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.192137  496650 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-156305"
	I1227 10:01:01.192177  496650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-156305"
	I1227 10:01:01.192516  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.196618  496650 out.go:179] * Verifying Kubernetes components...
	I1227 10:01:01.200086  496650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:01:01.257546  496650 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:01:01.260475  496650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:01:01.263344  496650 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:01:01.263906  496650 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-156305"
	W1227 10:01:01.263927  496650 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:01:01.263956  496650 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:01.264433  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.264662  496650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:01:01.264700  496650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:01:01.264751  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:01.270267  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:01:01.270304  496650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:01:01.270386  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:01.333207  496650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:01:01.333233  496650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:01:01.333313  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:01.333657  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:01.341186  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:01.364861  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:01.569614  496650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:01:01.611995  496650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:01:01.656517  496650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:01:01.769561  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:01:01.769643  496650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:01:01.851772  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:01:01.851848  496650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:01:01.903351  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:01:01.903435  496650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:01:01.928699  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:01:01.928772  496650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:01:01.948687  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:01:01.948763  496650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:01:01.971637  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:01:01.971728  496650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:01:01.996678  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:01:01.996760  496650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:01:02.021147  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:01:02.021225  496650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:01:02.044614  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:01:02.044708  496650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:01:02.068581  496650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:01:08.339756  496650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.770108973s)
	I1227 10:01:08.339811  496650 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.727744743s)
	I1227 10:01:08.339842  496650 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-156305" to be "Ready" ...
	I1227 10:01:08.340144  496650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.6835613s)
	I1227 10:01:08.366593  496650 node_ready.go:49] node "old-k8s-version-156305" is "Ready"
	I1227 10:01:08.366625  496650 node_ready.go:38] duration metric: took 26.770968ms for node "old-k8s-version-156305" to be "Ready" ...
	I1227 10:01:08.366641  496650 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:01:08.366700  496650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:01:08.886011  496650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.817338034s)
	I1227 10:01:08.886282  496650 api_server.go:72] duration metric: took 7.696241254s to wait for apiserver process to appear ...
	I1227 10:01:08.886299  496650 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:01:08.886318  496650 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:01:08.889318  496650 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-156305 addons enable metrics-server
	
	I1227 10:01:08.892139  496650 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 10:01:08.895110  496650 addons.go:530] duration metric: took 7.704746164s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
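With storage-provisioner, default-storageclass and dashboard applied, the corresponding objects should appear in their namespaces; a minimal post-enable check (standard kubectl, object names as created by the manifests above):

	kubectl -n kubernetes-dashboard get deploy,svc,pods
	kubectl -n kube-system get pod storage-provisioner
	kubectl get storageclass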
	I1227 10:01:08.907144  496650 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:01:08.909437  496650 api_server.go:141] control plane version: v1.28.0
	I1227 10:01:08.909467  496650 api_server.go:131] duration metric: took 23.162053ms to wait for apiserver health ...
	I1227 10:01:08.909476  496650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:01:08.920386  496650 system_pods.go:59] 8 kube-system pods found
	I1227 10:01:08.920428  496650 system_pods.go:61] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:01:08.920439  496650 system_pods.go:61] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:01:08.920446  496650 system_pods.go:61] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:01:08.920454  496650 system_pods.go:61] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:01:08.920466  496650 system_pods.go:61] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:01:08.920486  496650 system_pods.go:61] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:01:08.920498  496650 system_pods.go:61] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:01:08.920504  496650 system_pods.go:61] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Running
	I1227 10:01:08.920511  496650 system_pods.go:74] duration metric: took 11.027904ms to wait for pod list to return data ...
	I1227 10:01:08.920522  496650 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:01:08.925210  496650 default_sa.go:45] found service account: "default"
	I1227 10:01:08.925237  496650 default_sa.go:55] duration metric: took 4.708456ms for default service account to be created ...
	I1227 10:01:08.925247  496650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:01:08.933036  496650 system_pods.go:86] 8 kube-system pods found
	I1227 10:01:08.933072  496650 system_pods.go:89] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:01:08.933082  496650 system_pods.go:89] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:01:08.933088  496650 system_pods.go:89] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:01:08.933096  496650 system_pods.go:89] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:01:08.933105  496650 system_pods.go:89] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:01:08.933111  496650 system_pods.go:89] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:01:08.933119  496650 system_pods.go:89] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:01:08.933124  496650 system_pods.go:89] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Running
	I1227 10:01:08.933135  496650 system_pods.go:126] duration metric: took 7.881502ms to wait for k8s-apps to be running ...
	I1227 10:01:08.933150  496650 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:01:08.933209  496650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:08.964339  496650 system_svc.go:56] duration metric: took 31.168652ms WaitForService to wait for kubelet
	I1227 10:01:08.964417  496650 kubeadm.go:587] duration metric: took 7.774366266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:01:08.964458  496650 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:01:08.974603  496650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:01:08.974682  496650 node_conditions.go:123] node cpu capacity is 2
	I1227 10:01:08.974726  496650 node_conditions.go:105] duration metric: took 10.23628ms to run NodePressure ...
	I1227 10:01:08.974754  496650 start.go:242] waiting for startup goroutines ...
	I1227 10:01:08.974791  496650 start.go:247] waiting for cluster config update ...
	I1227 10:01:08.974822  496650 start.go:256] writing updated cluster config ...
	I1227 10:01:08.975242  496650 ssh_runner.go:195] Run: rm -f paused
	I1227 10:01:08.980016  496650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:01:08.987765  496650 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5jmbh" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:01:10.993640  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:12.994295  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:14.994749  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:16.995376  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:19.505384  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:21.994620  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:24.494503  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:26.994573  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:29.493694  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:31.494055  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:33.993629  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:36.493572  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:38.993989  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:40.994064  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:42.995563  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	I1227 10:01:44.493548  496650 pod_ready.go:94] pod "coredns-5dd5756b68-5jmbh" is "Ready"
	I1227 10:01:44.493576  496650 pod_ready.go:86] duration metric: took 35.505772422s for pod "coredns-5dd5756b68-5jmbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.496874  496650 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.502285  496650 pod_ready.go:94] pod "etcd-old-k8s-version-156305" is "Ready"
	I1227 10:01:44.502316  496650 pod_ready.go:86] duration metric: took 5.415575ms for pod "etcd-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.508460  496650 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.513667  496650 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-156305" is "Ready"
	I1227 10:01:44.513698  496650 pod_ready.go:86] duration metric: took 5.207646ms for pod "kube-apiserver-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.516871  496650 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.690924  496650 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-156305" is "Ready"
	I1227 10:01:44.690954  496650 pod_ready.go:86] duration metric: took 174.052387ms for pod "kube-controller-manager-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.893872  496650 pod_ready.go:83] waiting for pod "kube-proxy-pkr8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.291652  496650 pod_ready.go:94] pod "kube-proxy-pkr8q" is "Ready"
	I1227 10:01:45.291683  496650 pod_ready.go:86] duration metric: took 397.780792ms for pod "kube-proxy-pkr8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.491970  496650 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.891551  496650 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-156305" is "Ready"
	I1227 10:01:45.891632  496650 pod_ready.go:86] duration metric: took 399.630932ms for pod "kube-scheduler-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.891652  496650 pod_ready.go:40] duration metric: took 36.91155894s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:01:45.947233  496650 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 10:01:45.950480  496650 out.go:203] 
	W1227 10:01:45.953431  496650 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 10:01:45.956389  496650 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:01:45.959349  496650 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-156305" cluster and "default" namespace by default
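At this point the profile is running and kubectl is pointed at it; a quick post-start sanity check a user could run, using the profile name from this run:

	minikube status -p old-k8s-version-156305
	kubectl get nodes -o wide
	kubectl get pods -A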
	
	
	==> CRI-O <==
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.054517968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.061577124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.064286492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.086379124Z" level=info msg="Created container 0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q/dashboard-metrics-scraper" id=43af9396-7f9a-42ee-b395-f2b7c6f06af6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.09061939Z" level=info msg="Starting container: 0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3" id=f51bc300-f08a-45a7-a8fc-0984a90cd754 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.093061505Z" level=info msg="Started container" PID=1652 containerID=0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q/dashboard-metrics-scraper id=f51bc300-f08a-45a7-a8fc-0984a90cd754 name=/runtime.v1.RuntimeService/StartContainer sandboxID=900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61
	Dec 27 10:01:42 old-k8s-version-156305 conmon[1650]: conmon 0491eab1be817f5a0588 <ninfo>: container 1652 exited with status 1
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.27313663Z" level=info msg="Removing container: 490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692" id=5ea45676-531e-4f0e-b8f4-edd411e104d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.284947485Z" level=info msg="Error loading conmon cgroup of container 490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692: cgroup deleted" id=5ea45676-531e-4f0e-b8f4-edd411e104d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.291211128Z" level=info msg="Removed container 490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q/dashboard-metrics-scraper" id=5ea45676-531e-4f0e-b8f4-edd411e104d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.920190277Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.926250709Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.926452139Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.926558528Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.931867116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.932028431Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.93210135Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.93558818Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.935826829Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.935907527Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.939323061Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.939479412Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.939567503Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.943704203Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.943872943Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0491eab1be817       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   900f1369f8020       dashboard-metrics-scraper-5f989dc9cf-7ht6q       kubernetes-dashboard
	e30c642381fd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   59d7d48287e9c       storage-provisioner                              kube-system
	01fe9c54f51a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   611773856d41d       kubernetes-dashboard-8694d4445c-b6nzn            kubernetes-dashboard
	8501cac915148       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago       Running             coredns                     1                   7420f8aeaecf0       coredns-5dd5756b68-5jmbh                         kube-system
	ae1cd8b48e0d6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   3a50ad6fb69f7       busybox                                          default
	504b491214870       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago       Running             kindnet-cni                 1                   8751f5d356052       kindnet-w2m9v                                    kube-system
	ce4a589925fa2       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago       Running             kube-proxy                  1                   44a18e9fb66b9       kube-proxy-pkr8q                                 kube-system
	d611e0003d877       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   59d7d48287e9c       storage-provisioner                              kube-system
	cd8ca9064dcc7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   28df270e11a5c       kube-scheduler-old-k8s-version-156305            kube-system
	5708ffd35134c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   3eb40cdde08bc       etcd-old-k8s-version-156305                      kube-system
	d3e3d49e9f91e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   9acead1803c7d       kube-apiserver-old-k8s-version-156305            kube-system
	52de84716480a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   29cdfdddab34d       kube-controller-manager-old-k8s-version-156305   kube-system
	
	
	==> coredns [8501cac9151481649f57c3b1c4cff002c410d0569db5665e9748a613a6f2b616] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52134 - 49888 "HINFO IN 7215865035616262061.4788119228755665548. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030632432s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
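The warning above records CoreDNS failing to reach the kubernetes service VIP (10.96.0.1:443) at that moment; a minimal way to confirm it recovered once the node networking settled (hypothetical check):

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=5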
	
	
	==> describe nodes <==
	Name:               old-k8s-version-156305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-156305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=old-k8s-version-156305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_00_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-156305
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:01:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 10:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-156305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                d177d38b-fb11-4ae2-8414-a55831071099
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-5jmbh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-156305                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-w2m9v                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-156305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-156305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-pkr8q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-156305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7ht6q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-b6nzn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           110s                 node-controller  Node old-k8s-version-156305 event: Registered Node old-k8s-version-156305 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-156305 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                  node-controller  Node old-k8s-version-156305 event: Registered Node old-k8s-version-156305 in Controller
	
	
	==> dmesg <==
	[Dec27 09:27] overlayfs: idmapped layers are currently not supported
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063] <==
	{"level":"info","ts":"2025-12-27T10:01:01.062189Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:01:01.062197Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:01:01.062497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T10:01:01.06256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-27T10:01:01.062634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:01:01.062661Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:01:01.072805Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:01:01.08072Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:01:01.080767Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:01:01.08083Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:01:01.080838Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:01:02.790189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:01:02.790325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:01:02.790378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:01:02.790421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.790457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.790498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.790531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.798373Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-156305 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:01:02.798414Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:01:02.799569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:01:02.798444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:01:02.800568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:01:02.810202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:01:02.823963Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:02:01 up  2:44,  0 user,  load average: 1.36, 1.50, 1.94
	Linux old-k8s-version-156305 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [504b4912148700373d37491a8b4ed435ec42d5677bc4483cf42ab677f49c02f2] <==
	I1227 10:01:07.719377       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:01:07.719607       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:01:07.719741       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:01:07.719752       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:01:07.719764       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:01:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:01:07.919604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:01:07.919624       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:01:07.919632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:01:07.919931       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:01:37.920172       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:01:37.920288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:01:37.920298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:01:37.920329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1227 10:01:39.420747       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:01:39.420778       1 metrics.go:72] Registering metrics
	I1227 10:01:39.420846       1 controller.go:711] "Syncing nftables rules"
	I1227 10:01:47.919879       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:01:47.919936       1 main.go:301] handling current node
	I1227 10:01:57.926255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:01:57.926287       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c] <==
	I1227 10:01:06.836613       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:01:06.838550       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 10:01:06.838642       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 10:01:06.839991       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 10:01:06.840103       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:01:06.845525       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 10:01:06.845744       1 aggregator.go:166] initial CRD sync complete...
	I1227 10:01:06.845797       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 10:01:06.845846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:01:06.845877       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:01:06.891885       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 10:01:06.891986       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 10:01:06.892175       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1227 10:01:07.083410       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:01:07.510643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 10:01:08.683584       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 10:01:08.728626       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 10:01:08.757123       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:01:08.770578       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:01:08.783903       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 10:01:08.856846       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.79.115"}
	I1227 10:01:08.878638       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.93.247"}
	I1227 10:01:19.324610       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:01:19.362779       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 10:01:19.367538       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2] <==
	I1227 10:01:19.433337       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:01:19.436728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.348384ms"
	I1227 10:01:19.437150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.655305ms"
	I1227 10:01:19.450950       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:01:19.454307       1 shared_informer.go:318] Caches are synced for service account
	I1227 10:01:19.464445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.597505ms"
	I1227 10:01:19.465346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.87µs"
	I1227 10:01:19.465435       1 shared_informer.go:318] Caches are synced for namespace
	I1227 10:01:19.469954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.721295ms"
	I1227 10:01:19.483032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.623µs"
	I1227 10:01:19.505778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.700787ms"
	I1227 10:01:19.505915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.332µs"
	I1227 10:01:19.508335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="172.687µs"
	I1227 10:01:19.855178       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:01:19.855227       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 10:01:19.875249       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:01:24.233787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.035µs"
	I1227 10:01:25.237549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.792µs"
	I1227 10:01:26.243887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.596µs"
	I1227 10:01:28.257002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.251599ms"
	I1227 10:01:28.257200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.621µs"
	I1227 10:01:42.292495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.739µs"
	I1227 10:01:44.475775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.774941ms"
	I1227 10:01:44.475898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.278µs"
	I1227 10:01:49.755723       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.344µs"
	
	
	==> kube-proxy [ce4a589925fa2bea6e1c4dd2a3f450ac19fa2b6905610d4fdca193b304e7c654] <==
	I1227 10:01:08.092987       1 server_others.go:69] "Using iptables proxy"
	I1227 10:01:08.135659       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1227 10:01:08.375344       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:01:08.379124       1 server_others.go:152] "Using iptables Proxier"
	I1227 10:01:08.379161       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 10:01:08.379169       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 10:01:08.379205       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 10:01:08.379412       1 server.go:846] "Version info" version="v1.28.0"
	I1227 10:01:08.379422       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:01:08.380664       1 config.go:188] "Starting service config controller"
	I1227 10:01:08.380675       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 10:01:08.380693       1 config.go:97] "Starting endpoint slice config controller"
	I1227 10:01:08.380697       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 10:01:08.382595       1 config.go:315] "Starting node config controller"
	I1227 10:01:08.382608       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 10:01:08.481579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 10:01:08.481662       1 shared_informer.go:318] Caches are synced for service config
	I1227 10:01:08.483386       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051] <==
	I1227 10:01:05.019320       1 serving.go:348] Generated self-signed cert in-memory
	W1227 10:01:06.630608       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:01:06.630720       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:01:06.630759       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:01:06.630802       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:01:06.782809       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 10:01:06.782911       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:01:06.792617       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 10:01:06.795065       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 10:01:06.795150       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:01:06.806242       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 10:01:06.907885       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: I1227 10:01:19.576318     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/14248e6a-3981-4785-b2a6-b5c3128b97dd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-b6nzn\" (UID: \"14248e6a-3981-4785-b2a6-b5c3128b97dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-b6nzn"
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: I1227 10:01:19.576348     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmtf\" (UniqueName: \"kubernetes.io/projected/14248e6a-3981-4785-b2a6-b5c3128b97dd-kube-api-access-mfmtf\") pod \"kubernetes-dashboard-8694d4445c-b6nzn\" (UID: \"14248e6a-3981-4785-b2a6-b5c3128b97dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-b6nzn"
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: I1227 10:01:19.576371     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/77599837-429e-492b-a0ae-92863962e515-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7ht6q\" (UID: \"77599837-429e-492b-a0ae-92863962e515\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q"
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: W1227 10:01:19.762919     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/crio-900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61 WatchSource:0}: Error finding container 900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61: Status 404 returned error can't find the container with id 900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: W1227 10:01:19.774339     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/crio-611773856d41d2a12c68a74fc85b8d71a1c46db5873ba0e5e322f0fef6128794 WatchSource:0}: Error finding container 611773856d41d2a12c68a74fc85b8d71a1c46db5873ba0e5e322f0fef6128794: Status 404 returned error can't find the container with id 611773856d41d2a12c68a74fc85b8d71a1c46db5873ba0e5e322f0fef6128794
	Dec 27 10:01:24 old-k8s-version-156305 kubelet[783]: I1227 10:01:24.214733     783 scope.go:117] "RemoveContainer" containerID="f7b4bb0acbda658e7047ded9de708896ba10cfe2c6e9b766b6ca45d745170b5b"
	Dec 27 10:01:25 old-k8s-version-156305 kubelet[783]: I1227 10:01:25.219795     783 scope.go:117] "RemoveContainer" containerID="f7b4bb0acbda658e7047ded9de708896ba10cfe2c6e9b766b6ca45d745170b5b"
	Dec 27 10:01:25 old-k8s-version-156305 kubelet[783]: I1227 10:01:25.220133     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:25 old-k8s-version-156305 kubelet[783]: E1227 10:01:25.220396     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:26 old-k8s-version-156305 kubelet[783]: I1227 10:01:26.223842     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:26 old-k8s-version-156305 kubelet[783]: E1227 10:01:26.224186     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:29 old-k8s-version-156305 kubelet[783]: I1227 10:01:29.741896     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:29 old-k8s-version-156305 kubelet[783]: E1227 10:01:29.742247     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:38 old-k8s-version-156305 kubelet[783]: I1227 10:01:38.253340     783 scope.go:117] "RemoveContainer" containerID="d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac"
	Dec 27 10:01:38 old-k8s-version-156305 kubelet[783]: I1227 10:01:38.276905     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-b6nzn" podStartSLOduration=10.979225156 podCreationTimestamp="2025-12-27 10:01:19 +0000 UTC" firstStartedPulling="2025-12-27 10:01:19.776200883 +0000 UTC m=+19.992698617" lastFinishedPulling="2025-12-27 10:01:28.073819927 +0000 UTC m=+28.290317653" observedRunningTime="2025-12-27 10:01:28.244320328 +0000 UTC m=+28.460818062" watchObservedRunningTime="2025-12-27 10:01:38.276844192 +0000 UTC m=+38.493341918"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: I1227 10:01:42.050544     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: I1227 10:01:42.268584     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: I1227 10:01:42.268888     783 scope.go:117] "RemoveContainer" containerID="0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: E1227 10:01:42.269179     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:49 old-k8s-version-156305 kubelet[783]: I1227 10:01:49.741835     783 scope.go:117] "RemoveContainer" containerID="0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	Dec 27 10:01:49 old-k8s-version-156305 kubelet[783]: E1227 10:01:49.742208     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:58 old-k8s-version-156305 kubelet[783]: I1227 10:01:58.197653     783 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 10:01:58 old-k8s-version-156305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:01:58 old-k8s-version-156305 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:01:58 old-k8s-version-156305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [01fe9c54f51a28045014799ed2d0326a433e8b4d38927e40419708a1a7a0a3c7] <==
	2025/12/27 10:01:28 Using namespace: kubernetes-dashboard
	2025/12/27 10:01:28 Using in-cluster config to connect to apiserver
	2025/12/27 10:01:28 Using secret token for csrf signing
	2025/12/27 10:01:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:01:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:01:28 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 10:01:28 Generating JWE encryption key
	2025/12/27 10:01:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:01:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:01:28 Initializing JWE encryption key from synchronized object
	2025/12/27 10:01:28 Creating in-cluster Sidecar client
	2025/12/27 10:01:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:01:28 Serving insecurely on HTTP port: 9090
	2025/12/27 10:01:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:01:28 Starting overwatch
	
	
	==> storage-provisioner [d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac] <==
	I1227 10:01:07.860936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:01:37.863151       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e30c642381fd66b6ac4e8aacfb12d272b20d86702b36d52f8134df8d684e8b32] <==
	I1227 10:01:38.310625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:01:38.329337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:01:38.329524       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 10:01:55.728830       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:01:55.729008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-156305_551df9fa-0e63-4397-975d-49651314d37b!
	I1227 10:01:55.730735       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00426bfd-0cf8-4159-8bc4-8e458dec9071", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-156305_551df9fa-0e63-4397-975d-49651314d37b became leader
	I1227 10:01:55.829643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-156305_551df9fa-0e63-4397-975d-49651314d37b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156305 -n old-k8s-version-156305
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156305 -n old-k8s-version-156305: exit status 2 (342.592049ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-156305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-156305
helpers_test.go:244: (dbg) docker inspect old-k8s-version-156305:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426",
	        "Created": "2025-12-27T09:59:32.848675789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:00:52.964083972Z",
	            "FinishedAt": "2025-12-27T10:00:52.127246309Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/hostname",
	        "HostsPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/hosts",
	        "LogPath": "/var/lib/docker/containers/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426-json.log",
	        "Name": "/old-k8s-version-156305",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-156305:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-156305",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426",
	                "LowerDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68e3c3f0278cd7b3013cea940f132fed7eeabae4011f0b02c966dba9f26d7618/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-156305",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-156305/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-156305",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-156305",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-156305",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "421c3dbefc237ae8e1c5175dee07269f48d08ea2c2470e3596ad0e38a9b224c6",
	            "SandboxKey": "/var/run/docker/netns/421c3dbefc23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-156305": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:9d:db:20:54:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05c9d9029c4a7ae450dccaf37503f6c9dee72aa6f5a06e1cc6293b09c389163d",
	                    "EndpointID": "50a4c166556d059e1328116ce49f8da2faa8895b4f3fe841ccff4412dc0c04d2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-156305",
	                        "347dbce10daf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305: exit status 2 (368.694003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-156305 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-156305 logs -n 25: (1.295175487s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-246753 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo containerd config dump                                                                                                                                                                                                  │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo crio config                                                                                                                                                                                                             │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ delete  │ -p cilium-246753                                                                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ delete  │ -p cert-expiration-028595                                                                                                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:00:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:00:52.667703  496650 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:00:52.667884  496650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:00:52.667914  496650 out.go:374] Setting ErrFile to fd 2...
	I1227 10:00:52.667939  496650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:00:52.668326  496650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:00:52.668818  496650 out.go:368] Setting JSON to false
	I1227 10:00:52.669741  496650 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9802,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:00:52.669864  496650 start.go:143] virtualization:  
	I1227 10:00:52.672857  496650 out.go:179] * [old-k8s-version-156305] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:00:52.675423  496650 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:00:52.675622  496650 notify.go:221] Checking for updates...
	I1227 10:00:52.681459  496650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:00:52.684360  496650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:00:52.687151  496650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:00:52.689971  496650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:00:52.693006  496650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:00:52.696440  496650 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:00:52.699983  496650 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 10:00:52.702850  496650 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:00:52.733486  496650 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:00:52.733603  496650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:00:52.794400  496650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:00:52.784184865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:00:52.794517  496650 docker.go:319] overlay module found
	I1227 10:00:52.797609  496650 out.go:179] * Using the docker driver based on existing profile
	I1227 10:00:52.800406  496650 start.go:309] selected driver: docker
	I1227 10:00:52.800429  496650 start.go:928] validating driver "docker" against &{Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:00:52.800549  496650 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:00:52.801298  496650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:00:52.872918  496650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:00:52.863207291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:00:52.873260  496650 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:00:52.873300  496650 cni.go:84] Creating CNI manager for ""
	I1227 10:00:52.873361  496650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:00:52.873409  496650 start.go:353] cluster config:
	{Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:00:52.876616  496650 out.go:179] * Starting "old-k8s-version-156305" primary control-plane node in "old-k8s-version-156305" cluster
	I1227 10:00:52.879490  496650 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:00:52.882438  496650 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:00:52.885298  496650 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:00:52.885344  496650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:00:52.885370  496650 cache.go:65] Caching tarball of preloaded images
	I1227 10:00:52.885376  496650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:00:52.885453  496650 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:00:52.885464  496650 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 10:00:52.885583  496650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json ...
	I1227 10:00:52.909604  496650 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:00:52.909628  496650 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:00:52.909643  496650 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:00:52.909676  496650 start.go:360] acquireMachinesLock for old-k8s-version-156305: {Name:mk38a9d425ae861a3d9f927feaf86bb827ff0e6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:00:52.909744  496650 start.go:364] duration metric: took 51.094µs to acquireMachinesLock for "old-k8s-version-156305"
	I1227 10:00:52.909768  496650 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:00:52.909773  496650 fix.go:54] fixHost starting: 
	I1227 10:00:52.910038  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:52.927811  496650 fix.go:112] recreateIfNeeded on old-k8s-version-156305: state=Stopped err=<nil>
	W1227 10:00:52.927845  496650 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:00:52.931174  496650 out.go:252] * Restarting existing docker container for "old-k8s-version-156305" ...
	I1227 10:00:52.931275  496650 cli_runner.go:164] Run: docker start old-k8s-version-156305
	I1227 10:00:53.185820  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:00:53.207973  496650 kic.go:430] container "old-k8s-version-156305" state is running.
	I1227 10:00:53.208352  496650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 10:00:53.235558  496650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/config.json ...
	I1227 10:00:53.235787  496650 machine.go:94] provisionDockerMachine start ...
	I1227 10:00:53.240076  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:53.268004  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:53.268332  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:53.268341  496650 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:00:53.268935  496650 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41710->127.0.0.1:33426: read: connection reset by peer
	I1227 10:00:56.405931  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-156305
	
	I1227 10:00:56.405957  496650 ubuntu.go:182] provisioning hostname "old-k8s-version-156305"
	I1227 10:00:56.406080  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:56.429696  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:56.430019  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:56.430036  496650 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-156305 && echo "old-k8s-version-156305" | sudo tee /etc/hostname
	I1227 10:00:56.579925  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-156305
	
	I1227 10:00:56.580048  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:56.598035  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:56.598610  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:56.598637  496650 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-156305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-156305/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-156305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:00:56.738482  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:00:56.738510  496650 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:00:56.738547  496650 ubuntu.go:190] setting up certificates
	I1227 10:00:56.738558  496650 provision.go:84] configureAuth start
	I1227 10:00:56.738623  496650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 10:00:56.756960  496650 provision.go:143] copyHostCerts
	I1227 10:00:56.757035  496650 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:00:56.757062  496650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:00:56.757140  496650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:00:56.757239  496650 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:00:56.757248  496650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:00:56.757274  496650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:00:56.757332  496650 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:00:56.757340  496650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:00:56.757362  496650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:00:56.757414  496650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-156305 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-156305]
	I1227 10:00:57.161342  496650 provision.go:177] copyRemoteCerts
	I1227 10:00:57.161417  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:00:57.161457  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.179202  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:57.278527  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:00:57.296666  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 10:00:57.313898  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:00:57.331800  496650 provision.go:87] duration metric: took 593.215618ms to configureAuth
	I1227 10:00:57.331876  496650 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:00:57.332098  496650 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:00:57.332233  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.350003  496650 main.go:144] libmachine: Using SSH client type: native
	I1227 10:00:57.350359  496650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1227 10:00:57.350382  496650 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:00:57.723813  496650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:00:57.723920  496650 machine.go:97] duration metric: took 4.488122609s to provisionDockerMachine
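Note: the provisioning step just above injects the CRI-O insecure-registry option by writing /etc/sysconfig/crio.minikube over SSH and restarting CRI-O. A minimal way to confirm the drop-in on the node, assuming a minikube binary equivalent to the out/minikube-linux-arm64 used in this run is on PATH:

	$ minikube ssh -p old-k8s-version-156305 -- sudo cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '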
	I1227 10:00:57.723958  496650 start.go:293] postStartSetup for "old-k8s-version-156305" (driver="docker")
	I1227 10:00:57.723984  496650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:00:57.724064  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:00:57.724140  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.748057  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:57.846299  496650 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:00:57.849754  496650 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:00:57.849786  496650 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:00:57.849806  496650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:00:57.849901  496650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:00:57.850013  496650 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:00:57.850122  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:00:57.858054  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:00:57.876801  496650 start.go:296] duration metric: took 152.813231ms for postStartSetup
	I1227 10:00:57.876899  496650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:00:57.876944  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:57.894526  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:57.991776  496650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:00:57.996992  496650 fix.go:56] duration metric: took 5.087210827s for fixHost
	I1227 10:00:57.997030  496650 start.go:83] releasing machines lock for "old-k8s-version-156305", held for 5.087276542s
	I1227 10:00:57.997103  496650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-156305
	I1227 10:00:58.020995  496650 ssh_runner.go:195] Run: cat /version.json
	I1227 10:00:58.021081  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:58.021135  496650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:00:58.021226  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:00:58.049026  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:58.051188  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:00:58.241329  496650 ssh_runner.go:195] Run: systemctl --version
	I1227 10:00:58.248300  496650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:00:58.289099  496650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:00:58.293640  496650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:00:58.293789  496650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:00:58.301893  496650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:00:58.301917  496650 start.go:496] detecting cgroup driver to use...
	I1227 10:00:58.301968  496650 detect.go:187] detected "cgroupfs" cgroup driver on host os
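The "cgroupfs" value detected here matches the host Docker daemon's setting (CgroupDriver:cgroupfs in the docker info dump earlier in this log). As a quick cross-check, it can be read directly from the host:

	$ docker info --format '{{.CgroupDriver}}'
	cgroupfs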
	I1227 10:00:58.302032  496650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:00:58.317292  496650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:00:58.330524  496650 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:00:58.330589  496650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:00:58.346051  496650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:00:58.359614  496650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:00:58.495391  496650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:00:58.609291  496650 docker.go:234] disabling docker service ...
	I1227 10:00:58.609355  496650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:00:58.624110  496650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:00:58.637281  496650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:00:58.755038  496650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:00:58.868258  496650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:00:58.881819  496650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:00:58.896370  496650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 10:00:58.896459  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.906083  496650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:00:58.906200  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.915503  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.924125  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.932627  496650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:00:58.940866  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.949740  496650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.958465  496650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:00:58.967359  496650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:00:58.975259  496650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:00:58.982838  496650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:00:59.094879  496650 ssh_runner.go:195] Run: sudo systemctl restart crio
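The runtime configuration steps above amount to a handful of in-place edits to CRI-O's drop-in config followed by a restart. Replayed by hand on the node, the two key edits look like this (same sed expressions, paths, and values as in the log lines above):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio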
	I1227 10:00:59.297060  496650 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:00:59.297177  496650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:00:59.300978  496650 start.go:574] Will wait 60s for crictl version
	I1227 10:00:59.301072  496650 ssh_runner.go:195] Run: which crictl
	I1227 10:00:59.304618  496650 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:00:59.329538  496650 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:00:59.329651  496650 ssh_runner.go:195] Run: crio --version
	I1227 10:00:59.357954  496650 ssh_runner.go:195] Run: crio --version
	I1227 10:00:59.391657  496650 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 10:00:59.394493  496650 cli_runner.go:164] Run: docker network inspect old-k8s-version-156305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:00:59.411324  496650 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:00:59.415321  496650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:00:59.425333  496650 kubeadm.go:884] updating cluster {Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:00:59.425463  496650 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:00:59.425523  496650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:00:59.460418  496650 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:00:59.460442  496650 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:00:59.460499  496650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:00:59.485456  496650 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:00:59.485481  496650 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:00:59.485502  496650 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1227 10:00:59.485607  496650 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-156305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
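The kubelet unit fragment above is rendered to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). To inspect what actually landed on the node, a sketch using the same profile:

	$ minikube ssh -p old-k8s-version-156305 -- sudo systemctl cat kubelet --no-pager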
	I1227 10:00:59.485691  496650 ssh_runner.go:195] Run: crio config
	I1227 10:00:59.568379  496650 cni.go:84] Creating CNI manager for ""
	I1227 10:00:59.568418  496650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:00:59.568441  496650 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:00:59.568465  496650 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-156305 NodeName:old-k8s-version-156305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:00:59.568623  496650 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-156305"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
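This generated kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new (scp below) and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. To read the rendered file on the node (sketch, same profile as this run):

	$ minikube ssh -p old-k8s-version-156305 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new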
	
	I1227 10:00:59.568698  496650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 10:00:59.576645  496650 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:00:59.576724  496650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:00:59.584575  496650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 10:00:59.598059  496650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:00:59.611067  496650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1227 10:00:59.625050  496650 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:00:59.629176  496650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:00:59.639418  496650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:00:59.763860  496650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:00:59.780235  496650 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305 for IP: 192.168.85.2
	I1227 10:00:59.780270  496650 certs.go:195] generating shared ca certs ...
	I1227 10:00:59.780286  496650 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:00:59.780451  496650 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:00:59.780497  496650 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:00:59.780534  496650 certs.go:257] generating profile certs ...
	I1227 10:00:59.780623  496650 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.key
	I1227 10:00:59.780701  496650 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key.aa518b85
	I1227 10:00:59.780750  496650 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key
	I1227 10:00:59.780866  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:00:59.780904  496650 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:00:59.780918  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:00:59.780952  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:00:59.780982  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:00:59.781010  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:00:59.781059  496650 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:00:59.781642  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:00:59.803450  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:00:59.821319  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:00:59.841011  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:00:59.873860  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 10:00:59.907441  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:00:59.975391  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:00:59.996371  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:01:00.029229  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:01:00.108718  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:01:00.179569  496650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:01:00.222865  496650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:01:00.290526  496650 ssh_runner.go:195] Run: openssl version
	I1227 10:01:00.300223  496650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.343177  496650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:01:00.355317  496650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.361847  496650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.361964  496650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:01:00.410440  496650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:01:00.440074  496650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.450781  496650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:01:00.461538  496650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.468301  496650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.468376  496650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:01:00.516104  496650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:01:00.526860  496650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.539641  496650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:01:00.549509  496650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.554390  496650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.554461  496650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:01:00.607399  496650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
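The certificate checks above follow OpenSSL's hashed-symlink convention: each CA PEM is linked into /etc/ssl/certs and must be reachable via a symlink named after its subject hash (b5213941 for minikubeCA in this run). A rough by-hand equivalent of the minikubeCA pass, using only the commands seen in the log:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo test -L "/etc/ssl/certs/${hash}.0" && echo "minikubeCA trusted"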
	I1227 10:01:00.628951  496650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:01:00.634722  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:01:00.729554  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:01:00.817106  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:01:00.882882  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:01:00.941530  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:01:01.003358  496650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:01:01.054882  496650 kubeadm.go:401] StartCluster: {Name:old-k8s-version-156305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-156305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:01:01.055040  496650 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:01:01.055148  496650 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:01:01.123137  496650 cri.go:96] found id: "cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051"
	I1227 10:01:01.123220  496650 cri.go:96] found id: "5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063"
	I1227 10:01:01.123239  496650 cri.go:96] found id: "d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c"
	I1227 10:01:01.123264  496650 cri.go:96] found id: "52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2"
	I1227 10:01:01.123315  496650 cri.go:96] found id: ""
	I1227 10:01:01.123404  496650 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:01:01.143644  496650 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:01:01Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:01:01.143808  496650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:01:01.156749  496650 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:01:01.156829  496650 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:01:01.156939  496650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:01:01.172109  496650 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:01:01.172645  496650 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-156305" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:01:01.172813  496650 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-156305" cluster setting kubeconfig missing "old-k8s-version-156305" context setting]
	I1227 10:01:01.173289  496650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:01:01.175085  496650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:01:01.188666  496650 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:01:01.188752  496650 kubeadm.go:602] duration metric: took 31.900845ms to restartPrimaryControlPlane
	I1227 10:01:01.188783  496650 kubeadm.go:403] duration metric: took 133.906ms to StartCluster
	I1227 10:01:01.188833  496650 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:01:01.188936  496650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:01:01.189672  496650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:01:01.189986  496650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:01:01.190366  496650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:01:01.190461  496650 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-156305"
	I1227 10:01:01.190489  496650 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-156305"
	W1227 10:01:01.190498  496650 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:01:01.190524  496650 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:01.190633  496650 config.go:182] Loaded profile config "old-k8s-version-156305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:01:01.190813  496650 addons.go:70] Setting dashboard=true in profile "old-k8s-version-156305"
	I1227 10:01:01.190897  496650 addons.go:239] Setting addon dashboard=true in "old-k8s-version-156305"
	W1227 10:01:01.190921  496650 addons.go:248] addon dashboard should already be in state true
	I1227 10:01:01.190979  496650 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:01.191037  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.191608  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.192137  496650 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-156305"
	I1227 10:01:01.192177  496650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-156305"
	I1227 10:01:01.192516  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.196618  496650 out.go:179] * Verifying Kubernetes components...
	I1227 10:01:01.200086  496650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:01:01.257546  496650 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:01:01.260475  496650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:01:01.263344  496650 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:01:01.263906  496650 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-156305"
	W1227 10:01:01.263927  496650 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:01:01.263956  496650 host.go:66] Checking if "old-k8s-version-156305" exists ...
	I1227 10:01:01.264433  496650 cli_runner.go:164] Run: docker container inspect old-k8s-version-156305 --format={{.State.Status}}
	I1227 10:01:01.264662  496650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:01:01.264700  496650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:01:01.264751  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:01.270267  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:01:01.270304  496650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:01:01.270386  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:01.333207  496650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:01:01.333233  496650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:01:01.333313  496650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-156305
	I1227 10:01:01.333657  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:01.341186  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:01.364861  496650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/old-k8s-version-156305/id_rsa Username:docker}
	I1227 10:01:01.569614  496650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:01:01.611995  496650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:01:01.656517  496650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:01:01.769561  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:01:01.769643  496650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:01:01.851772  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:01:01.851848  496650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:01:01.903351  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:01:01.903435  496650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:01:01.928699  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:01:01.928772  496650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:01:01.948687  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:01:01.948763  496650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:01:01.971637  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:01:01.971728  496650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:01:01.996678  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:01:01.996760  496650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:01:02.021147  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:01:02.021225  496650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:01:02.044614  496650 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:01:02.044708  496650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:01:02.068581  496650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:01:08.339756  496650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.770108973s)
	I1227 10:01:08.339811  496650 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.727744743s)
	I1227 10:01:08.339842  496650 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-156305" to be "Ready" ...
	I1227 10:01:08.340144  496650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.6835613s)
	I1227 10:01:08.366593  496650 node_ready.go:49] node "old-k8s-version-156305" is "Ready"
	I1227 10:01:08.366625  496650 node_ready.go:38] duration metric: took 26.770968ms for node "old-k8s-version-156305" to be "Ready" ...
	I1227 10:01:08.366641  496650 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:01:08.366700  496650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:01:08.886011  496650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.817338034s)
	I1227 10:01:08.886282  496650 api_server.go:72] duration metric: took 7.696241254s to wait for apiserver process to appear ...
	I1227 10:01:08.886299  496650 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:01:08.886318  496650 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:01:08.889318  496650 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-156305 addons enable metrics-server
	
	I1227 10:01:08.892139  496650 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 10:01:08.895110  496650 addons.go:530] duration metric: took 7.704746164s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 10:01:08.907144  496650 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:01:08.909437  496650 api_server.go:141] control plane version: v1.28.0
	I1227 10:01:08.909467  496650 api_server.go:131] duration metric: took 23.162053ms to wait for apiserver health ...
	I1227 10:01:08.909476  496650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:01:08.920386  496650 system_pods.go:59] 8 kube-system pods found
	I1227 10:01:08.920428  496650 system_pods.go:61] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:01:08.920439  496650 system_pods.go:61] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:01:08.920446  496650 system_pods.go:61] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:01:08.920454  496650 system_pods.go:61] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:01:08.920466  496650 system_pods.go:61] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:01:08.920486  496650 system_pods.go:61] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:01:08.920498  496650 system_pods.go:61] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:01:08.920504  496650 system_pods.go:61] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Running
	I1227 10:01:08.920511  496650 system_pods.go:74] duration metric: took 11.027904ms to wait for pod list to return data ...
	I1227 10:01:08.920522  496650 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:01:08.925210  496650 default_sa.go:45] found service account: "default"
	I1227 10:01:08.925237  496650 default_sa.go:55] duration metric: took 4.708456ms for default service account to be created ...
	I1227 10:01:08.925247  496650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:01:08.933036  496650 system_pods.go:86] 8 kube-system pods found
	I1227 10:01:08.933072  496650 system_pods.go:89] "coredns-5dd5756b68-5jmbh" [1eb3c15a-a576-4711-849e-790fa87ddc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:01:08.933082  496650 system_pods.go:89] "etcd-old-k8s-version-156305" [788b8b14-c8e8-45f5-85a5-d93643420eaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:01:08.933088  496650 system_pods.go:89] "kindnet-w2m9v" [fba5eff1-7424-451f-9109-7e58587628ef] Running
	I1227 10:01:08.933096  496650 system_pods.go:89] "kube-apiserver-old-k8s-version-156305" [8408f30c-4be2-4423-8ed2-bfaf23f66b8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:01:08.933105  496650 system_pods.go:89] "kube-controller-manager-old-k8s-version-156305" [4647838c-b00e-4945-995c-58b03a1dea94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:01:08.933111  496650 system_pods.go:89] "kube-proxy-pkr8q" [1e2c235b-d7bb-427a-8c56-988f64794d9d] Running
	I1227 10:01:08.933119  496650 system_pods.go:89] "kube-scheduler-old-k8s-version-156305" [8fa045f4-b8c0-4d14-9b36-f0a5ec8d6b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:01:08.933124  496650 system_pods.go:89] "storage-provisioner" [f6bd7b49-196a-44fd-87ef-c75c1aec15de] Running
	I1227 10:01:08.933135  496650 system_pods.go:126] duration metric: took 7.881502ms to wait for k8s-apps to be running ...
	I1227 10:01:08.933150  496650 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:01:08.933209  496650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:08.964339  496650 system_svc.go:56] duration metric: took 31.168652ms WaitForService to wait for kubelet
	I1227 10:01:08.964417  496650 kubeadm.go:587] duration metric: took 7.774366266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:01:08.964458  496650 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:01:08.974603  496650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:01:08.974682  496650 node_conditions.go:123] node cpu capacity is 2
	I1227 10:01:08.974726  496650 node_conditions.go:105] duration metric: took 10.23628ms to run NodePressure ...
	I1227 10:01:08.974754  496650 start.go:242] waiting for startup goroutines ...
	I1227 10:01:08.974791  496650 start.go:247] waiting for cluster config update ...
	I1227 10:01:08.974822  496650 start.go:256] writing updated cluster config ...
	I1227 10:01:08.975242  496650 ssh_runner.go:195] Run: rm -f paused
	I1227 10:01:08.980016  496650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:01:08.987765  496650 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5jmbh" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:01:10.993640  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:12.994295  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:14.994749  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:16.995376  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:19.505384  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:21.994620  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:24.494503  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:26.994573  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:29.493694  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:31.494055  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:33.993629  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:36.493572  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:38.993989  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:40.994064  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	W1227 10:01:42.995563  496650 pod_ready.go:104] pod "coredns-5dd5756b68-5jmbh" is not "Ready", error: <nil>
	I1227 10:01:44.493548  496650 pod_ready.go:94] pod "coredns-5dd5756b68-5jmbh" is "Ready"
	I1227 10:01:44.493576  496650 pod_ready.go:86] duration metric: took 35.505772422s for pod "coredns-5dd5756b68-5jmbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.496874  496650 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.502285  496650 pod_ready.go:94] pod "etcd-old-k8s-version-156305" is "Ready"
	I1227 10:01:44.502316  496650 pod_ready.go:86] duration metric: took 5.415575ms for pod "etcd-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.508460  496650 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.513667  496650 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-156305" is "Ready"
	I1227 10:01:44.513698  496650 pod_ready.go:86] duration metric: took 5.207646ms for pod "kube-apiserver-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.516871  496650 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.690924  496650 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-156305" is "Ready"
	I1227 10:01:44.690954  496650 pod_ready.go:86] duration metric: took 174.052387ms for pod "kube-controller-manager-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:44.893872  496650 pod_ready.go:83] waiting for pod "kube-proxy-pkr8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.291652  496650 pod_ready.go:94] pod "kube-proxy-pkr8q" is "Ready"
	I1227 10:01:45.291683  496650 pod_ready.go:86] duration metric: took 397.780792ms for pod "kube-proxy-pkr8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.491970  496650 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.891551  496650 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-156305" is "Ready"
	I1227 10:01:45.891632  496650 pod_ready.go:86] duration metric: took 399.630932ms for pod "kube-scheduler-old-k8s-version-156305" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:01:45.891652  496650 pod_ready.go:40] duration metric: took 36.91155894s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:01:45.947233  496650 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 10:01:45.950480  496650 out.go:203] 
	W1227 10:01:45.953431  496650 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 10:01:45.956389  496650 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:01:45.959349  496650 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-156305" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.054517968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.061577124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.064286492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.086379124Z" level=info msg="Created container 0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q/dashboard-metrics-scraper" id=43af9396-7f9a-42ee-b395-f2b7c6f06af6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.09061939Z" level=info msg="Starting container: 0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3" id=f51bc300-f08a-45a7-a8fc-0984a90cd754 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.093061505Z" level=info msg="Started container" PID=1652 containerID=0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q/dashboard-metrics-scraper id=f51bc300-f08a-45a7-a8fc-0984a90cd754 name=/runtime.v1.RuntimeService/StartContainer sandboxID=900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61
	Dec 27 10:01:42 old-k8s-version-156305 conmon[1650]: conmon 0491eab1be817f5a0588 <ninfo>: container 1652 exited with status 1
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.27313663Z" level=info msg="Removing container: 490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692" id=5ea45676-531e-4f0e-b8f4-edd411e104d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.284947485Z" level=info msg="Error loading conmon cgroup of container 490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692: cgroup deleted" id=5ea45676-531e-4f0e-b8f4-edd411e104d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:01:42 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:42.291211128Z" level=info msg="Removed container 490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q/dashboard-metrics-scraper" id=5ea45676-531e-4f0e-b8f4-edd411e104d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.920190277Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.926250709Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.926452139Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.926558528Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.931867116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.932028431Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.93210135Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.93558818Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.935826829Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.935907527Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.939323061Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.939479412Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.939567503Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.943704203Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:01:47 old-k8s-version-156305 crio[654]: time="2025-12-27T10:01:47.943872943Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0491eab1be817       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   900f1369f8020       dashboard-metrics-scraper-5f989dc9cf-7ht6q       kubernetes-dashboard
	e30c642381fd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   59d7d48287e9c       storage-provisioner                              kube-system
	01fe9c54f51a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   611773856d41d       kubernetes-dashboard-8694d4445c-b6nzn            kubernetes-dashboard
	8501cac915148       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   7420f8aeaecf0       coredns-5dd5756b68-5jmbh                         kube-system
	ae1cd8b48e0d6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   3a50ad6fb69f7       busybox                                          default
	504b491214870       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   8751f5d356052       kindnet-w2m9v                                    kube-system
	ce4a589925fa2       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   44a18e9fb66b9       kube-proxy-pkr8q                                 kube-system
	d611e0003d877       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   59d7d48287e9c       storage-provisioner                              kube-system
	cd8ca9064dcc7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   28df270e11a5c       kube-scheduler-old-k8s-version-156305            kube-system
	5708ffd35134c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   3eb40cdde08bc       etcd-old-k8s-version-156305                      kube-system
	d3e3d49e9f91e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   9acead1803c7d       kube-apiserver-old-k8s-version-156305            kube-system
	52de84716480a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   29cdfdddab34d       kube-controller-manager-old-k8s-version-156305   kube-system
	
	
	==> coredns [8501cac9151481649f57c3b1c4cff002c410d0569db5665e9748a613a6f2b616] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52134 - 49888 "HINFO IN 7215865035616262061.4788119228755665548. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030632432s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-156305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-156305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=old-k8s-version-156305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_00_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-156305
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:01:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 09:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:01:37 +0000   Sat, 27 Dec 2025 10:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-156305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                d177d38b-fb11-4ae2-8414-a55831071099
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-5jmbh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-156305                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-w2m9v                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-156305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-156305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-pkr8q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-156305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7ht6q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-b6nzn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-156305 event: Registered Node old-k8s-version-156305 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-156305 status is now: NodeReady
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-156305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-156305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-156305 event: Registered Node old-k8s-version-156305 in Controller
	
	
	==> dmesg <==
	[Dec27 09:27] overlayfs: idmapped layers are currently not supported
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5708ffd35134c895f5788182b61cdd93ddb87642674f5a59ca92211051f91063] <==
	{"level":"info","ts":"2025-12-27T10:01:01.062189Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:01:01.062197Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:01:01.062497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T10:01:01.06256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-27T10:01:01.062634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:01:01.062661Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:01:01.072805Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:01:01.08072Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:01:01.080767Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:01:01.08083Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:01:01.080838Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:01:02.790189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:01:02.790325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:01:02.790378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:01:02.790421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.790457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.790498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.790531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:01:02.798373Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-156305 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:01:02.798414Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:01:02.799569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:01:02.798444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:01:02.800568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:01:02.810202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:01:02.823963Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:02:03 up  2:44,  0 user,  load average: 1.36, 1.50, 1.94
	Linux old-k8s-version-156305 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [504b4912148700373d37491a8b4ed435ec42d5677bc4483cf42ab677f49c02f2] <==
	I1227 10:01:07.719377       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:01:07.719607       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:01:07.719741       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:01:07.719752       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:01:07.719764       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:01:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:01:07.919604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:01:07.919624       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:01:07.919632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:01:07.919931       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:01:37.920172       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:01:37.920288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:01:37.920298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:01:37.920329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1227 10:01:39.420747       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:01:39.420778       1 metrics.go:72] Registering metrics
	I1227 10:01:39.420846       1 controller.go:711] "Syncing nftables rules"
	I1227 10:01:47.919879       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:01:47.919936       1 main.go:301] handling current node
	I1227 10:01:57.926255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:01:57.926287       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d3e3d49e9f91ec5959eb25abafbab591f5c21e6495480628f706735c5fe3d04c] <==
	I1227 10:01:06.836613       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:01:06.838550       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 10:01:06.838642       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 10:01:06.839991       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 10:01:06.840103       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:01:06.845525       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 10:01:06.845744       1 aggregator.go:166] initial CRD sync complete...
	I1227 10:01:06.845797       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 10:01:06.845846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:01:06.845877       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:01:06.891885       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 10:01:06.891986       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 10:01:06.892175       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1227 10:01:07.083410       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:01:07.510643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 10:01:08.683584       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 10:01:08.728626       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 10:01:08.757123       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:01:08.770578       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:01:08.783903       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 10:01:08.856846       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.79.115"}
	I1227 10:01:08.878638       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.93.247"}
	I1227 10:01:19.324610       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:01:19.362779       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 10:01:19.367538       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [52de84716480aa6f441c8bd1ee9047feb54b482e6049d39d4b329ff433bc6cb2] <==
	I1227 10:01:19.433337       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:01:19.436728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.348384ms"
	I1227 10:01:19.437150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.655305ms"
	I1227 10:01:19.450950       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:01:19.454307       1 shared_informer.go:318] Caches are synced for service account
	I1227 10:01:19.464445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.597505ms"
	I1227 10:01:19.465346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.87µs"
	I1227 10:01:19.465435       1 shared_informer.go:318] Caches are synced for namespace
	I1227 10:01:19.469954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.721295ms"
	I1227 10:01:19.483032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.623µs"
	I1227 10:01:19.505778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.700787ms"
	I1227 10:01:19.505915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.332µs"
	I1227 10:01:19.508335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="172.687µs"
	I1227 10:01:19.855178       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:01:19.855227       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 10:01:19.875249       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:01:24.233787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.035µs"
	I1227 10:01:25.237549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.792µs"
	I1227 10:01:26.243887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.596µs"
	I1227 10:01:28.257002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.251599ms"
	I1227 10:01:28.257200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.621µs"
	I1227 10:01:42.292495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.739µs"
	I1227 10:01:44.475775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.774941ms"
	I1227 10:01:44.475898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.278µs"
	I1227 10:01:49.755723       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.344µs"
	
	
	==> kube-proxy [ce4a589925fa2bea6e1c4dd2a3f450ac19fa2b6905610d4fdca193b304e7c654] <==
	I1227 10:01:08.092987       1 server_others.go:69] "Using iptables proxy"
	I1227 10:01:08.135659       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1227 10:01:08.375344       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:01:08.379124       1 server_others.go:152] "Using iptables Proxier"
	I1227 10:01:08.379161       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 10:01:08.379169       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 10:01:08.379205       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 10:01:08.379412       1 server.go:846] "Version info" version="v1.28.0"
	I1227 10:01:08.379422       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:01:08.380664       1 config.go:188] "Starting service config controller"
	I1227 10:01:08.380675       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 10:01:08.380693       1 config.go:97] "Starting endpoint slice config controller"
	I1227 10:01:08.380697       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 10:01:08.382595       1 config.go:315] "Starting node config controller"
	I1227 10:01:08.382608       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 10:01:08.481579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 10:01:08.481662       1 shared_informer.go:318] Caches are synced for service config
	I1227 10:01:08.483386       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [cd8ca9064dcc761f5b92eb96cced8f7de01ddc2ffebf6147cbc5c135c3801051] <==
	I1227 10:01:05.019320       1 serving.go:348] Generated self-signed cert in-memory
	W1227 10:01:06.630608       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:01:06.630720       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:01:06.630759       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:01:06.630802       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:01:06.782809       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 10:01:06.782911       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:01:06.792617       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 10:01:06.795065       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 10:01:06.795150       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:01:06.806242       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 10:01:06.907885       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: I1227 10:01:19.576318     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/14248e6a-3981-4785-b2a6-b5c3128b97dd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-b6nzn\" (UID: \"14248e6a-3981-4785-b2a6-b5c3128b97dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-b6nzn"
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: I1227 10:01:19.576348     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmtf\" (UniqueName: \"kubernetes.io/projected/14248e6a-3981-4785-b2a6-b5c3128b97dd-kube-api-access-mfmtf\") pod \"kubernetes-dashboard-8694d4445c-b6nzn\" (UID: \"14248e6a-3981-4785-b2a6-b5c3128b97dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-b6nzn"
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: I1227 10:01:19.576371     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/77599837-429e-492b-a0ae-92863962e515-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7ht6q\" (UID: \"77599837-429e-492b-a0ae-92863962e515\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q"
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: W1227 10:01:19.762919     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/crio-900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61 WatchSource:0}: Error finding container 900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61: Status 404 returned error can't find the container with id 900f1369f80200f2910ad79c76bad68715d9412c1126f128dd7be15c20776a61
	Dec 27 10:01:19 old-k8s-version-156305 kubelet[783]: W1227 10:01:19.774339     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/347dbce10dafb51b72ca7dcef615f0de8b8f7496d754df760f0354e3d747f426/crio-611773856d41d2a12c68a74fc85b8d71a1c46db5873ba0e5e322f0fef6128794 WatchSource:0}: Error finding container 611773856d41d2a12c68a74fc85b8d71a1c46db5873ba0e5e322f0fef6128794: Status 404 returned error can't find the container with id 611773856d41d2a12c68a74fc85b8d71a1c46db5873ba0e5e322f0fef6128794
	Dec 27 10:01:24 old-k8s-version-156305 kubelet[783]: I1227 10:01:24.214733     783 scope.go:117] "RemoveContainer" containerID="f7b4bb0acbda658e7047ded9de708896ba10cfe2c6e9b766b6ca45d745170b5b"
	Dec 27 10:01:25 old-k8s-version-156305 kubelet[783]: I1227 10:01:25.219795     783 scope.go:117] "RemoveContainer" containerID="f7b4bb0acbda658e7047ded9de708896ba10cfe2c6e9b766b6ca45d745170b5b"
	Dec 27 10:01:25 old-k8s-version-156305 kubelet[783]: I1227 10:01:25.220133     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:25 old-k8s-version-156305 kubelet[783]: E1227 10:01:25.220396     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:26 old-k8s-version-156305 kubelet[783]: I1227 10:01:26.223842     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:26 old-k8s-version-156305 kubelet[783]: E1227 10:01:26.224186     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:29 old-k8s-version-156305 kubelet[783]: I1227 10:01:29.741896     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:29 old-k8s-version-156305 kubelet[783]: E1227 10:01:29.742247     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:38 old-k8s-version-156305 kubelet[783]: I1227 10:01:38.253340     783 scope.go:117] "RemoveContainer" containerID="d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac"
	Dec 27 10:01:38 old-k8s-version-156305 kubelet[783]: I1227 10:01:38.276905     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-b6nzn" podStartSLOduration=10.979225156 podCreationTimestamp="2025-12-27 10:01:19 +0000 UTC" firstStartedPulling="2025-12-27 10:01:19.776200883 +0000 UTC m=+19.992698617" lastFinishedPulling="2025-12-27 10:01:28.073819927 +0000 UTC m=+28.290317653" observedRunningTime="2025-12-27 10:01:28.244320328 +0000 UTC m=+28.460818062" watchObservedRunningTime="2025-12-27 10:01:38.276844192 +0000 UTC m=+38.493341918"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: I1227 10:01:42.050544     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: I1227 10:01:42.268584     783 scope.go:117] "RemoveContainer" containerID="490359e4d7be99b674aa5be025f58c6d5a47ecfe5324dfd8b2a597a900885692"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: I1227 10:01:42.268888     783 scope.go:117] "RemoveContainer" containerID="0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	Dec 27 10:01:42 old-k8s-version-156305 kubelet[783]: E1227 10:01:42.269179     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:49 old-k8s-version-156305 kubelet[783]: I1227 10:01:49.741835     783 scope.go:117] "RemoveContainer" containerID="0491eab1be817f5a0588cba3090545b34f363760f5960123eab7b9b3a9b19dd3"
	Dec 27 10:01:49 old-k8s-version-156305 kubelet[783]: E1227 10:01:49.742208     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7ht6q_kubernetes-dashboard(77599837-429e-492b-a0ae-92863962e515)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7ht6q" podUID="77599837-429e-492b-a0ae-92863962e515"
	Dec 27 10:01:58 old-k8s-version-156305 kubelet[783]: I1227 10:01:58.197653     783 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 10:01:58 old-k8s-version-156305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:01:58 old-k8s-version-156305 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:01:58 old-k8s-version-156305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [01fe9c54f51a28045014799ed2d0326a433e8b4d38927e40419708a1a7a0a3c7] <==
	2025/12/27 10:01:28 Using namespace: kubernetes-dashboard
	2025/12/27 10:01:28 Using in-cluster config to connect to apiserver
	2025/12/27 10:01:28 Using secret token for csrf signing
	2025/12/27 10:01:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:01:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:01:28 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 10:01:28 Generating JWE encryption key
	2025/12/27 10:01:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:01:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:01:28 Initializing JWE encryption key from synchronized object
	2025/12/27 10:01:28 Creating in-cluster Sidecar client
	2025/12/27 10:01:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:01:28 Serving insecurely on HTTP port: 9090
	2025/12/27 10:01:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:01:28 Starting overwatch
	
	
	==> storage-provisioner [d611e0003d877a2c61c9a067f013513d4747d34cdf80058c12dbcd2cee6f4aac] <==
	I1227 10:01:07.860936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:01:37.863151       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e30c642381fd66b6ac4e8aacfb12d272b20d86702b36d52f8134df8d684e8b32] <==
	I1227 10:01:38.310625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:01:38.329337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:01:38.329524       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 10:01:55.728830       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:01:55.729008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-156305_551df9fa-0e63-4397-975d-49651314d37b!
	I1227 10:01:55.730735       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00426bfd-0cf8-4159-8bc4-8e458dec9071", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-156305_551df9fa-0e63-4397-975d-49651314d37b became leader
	I1227 10:01:55.829643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-156305_551df9fa-0e63-4397-975d-49651314d37b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156305 -n old-k8s-version-156305
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156305 -n old-k8s-version-156305: exit status 2 (356.865507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-156305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (244.101087ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:03:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-021144 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-021144 describe deploy/metrics-server -n kube-system: exit status 1 (78.770714ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-021144 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-021144
helpers_test.go:244: (dbg) docker inspect no-preload-021144:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1",
	        "Created": "2025-12-27T10:02:08.318546254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:02:08.388728356Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/hosts",
	        "LogPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1-json.log",
	        "Name": "/no-preload-021144",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-021144:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-021144",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1",
	                "LowerDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-021144",
	                "Source": "/var/lib/docker/volumes/no-preload-021144/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-021144",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-021144",
	                "name.minikube.sigs.k8s.io": "no-preload-021144",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9ccc962a88bbc7808d2c289ff69d183b88d64bde83324d40d2f27e9d0863062",
	            "SandboxKey": "/var/run/docker/netns/c9ccc962a88b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-021144": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:f6:d4:33:61:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "580e567ffdb0a3108b9672089c71417e29baa569ff9d213d3d1dd6886e00e475",
	                    "EndpointID": "9e26a163c473a331626244fb71104d190d1acacae64fb7fa1bf4be33ffe6bf0e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-021144",
	                        "ab89938537bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-021144 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-021144 logs -n 25: (1.137459791s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-246753 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-246753 sudo crio config                                                                                                                                                                                                             │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │                     │
	│ delete  │ -p cilium-246753                                                                                                                                                                                                                              │ cilium-246753             │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:52 UTC │ 27 Dec 25 09:52 UTC │
	│ start   │ -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ delete  │ -p cert-expiration-028595                                                                                                                                                                                                                     │ cert-expiration-028595    │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │ 27 Dec 25 09:55 UTC │
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:02:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:02:07.290503  500772 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:02:07.290713  500772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:02:07.290742  500772 out.go:374] Setting ErrFile to fd 2...
	I1227 10:02:07.290763  500772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:02:07.291157  500772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:02:07.291760  500772 out.go:368] Setting JSON to false
	I1227 10:02:07.292853  500772 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9877,"bootTime":1766819851,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:02:07.293274  500772 start.go:143] virtualization:  
	I1227 10:02:07.297242  500772 out.go:179] * [no-preload-021144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:02:07.301570  500772 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:02:07.301752  500772 notify.go:221] Checking for updates...
	I1227 10:02:07.307878  500772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:02:07.310916  500772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:02:07.314018  500772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:02:07.317060  500772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:02:07.319986  500772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:02:07.323412  500772 config.go:182] Loaded profile config "force-systemd-flag-779725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:02:07.323518  500772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:02:07.352578  500772 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:02:07.352702  500772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:02:07.425258  500772 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:02:07.413293532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:02:07.425370  500772 docker.go:319] overlay module found
	I1227 10:02:07.428688  500772 out.go:179] * Using the docker driver based on user configuration
	I1227 10:02:07.431689  500772 start.go:309] selected driver: docker
	I1227 10:02:07.431714  500772 start.go:928] validating driver "docker" against <nil>
	I1227 10:02:07.431744  500772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:02:07.432457  500772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:02:07.504489  500772 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:02:07.495447262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:02:07.504656  500772 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:02:07.504891  500772 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:02:07.507907  500772 out.go:179] * Using Docker driver with root privileges
	I1227 10:02:07.510754  500772 cni.go:84] Creating CNI manager for ""
	I1227 10:02:07.510825  500772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:02:07.510839  500772 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:02:07.510947  500772 start.go:353] cluster config:
	{Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
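The cluster config above is generated from the start flags. A rough shell equivalent of the invocation that would produce this profile (a sketch inferred from the config dump; the exact flag set used by the test harness is not shown in this excerpt):

    # Sketch: start flags matching the generated config above. Profile name,
    # driver, runtime, resources and Kubernetes version are taken from the log;
    # everything else is left at defaults.
    out/minikube-linux-arm64 start -p no-preload-021144 \
      --driver=docker \
      --container-runtime=crio \
      --memory=3072 --cpus=2 --disk-size=20000mb \
      --kubernetes-version=v1.35.0 \
      --preload=false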
	I1227 10:02:07.516014  500772 out.go:179] * Starting "no-preload-021144" primary control-plane node in "no-preload-021144" cluster
	I1227 10:02:07.518864  500772 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:02:07.521977  500772 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:02:07.524759  500772 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:02:07.524840  500772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:02:07.524904  500772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/config.json ...
	I1227 10:02:07.524935  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/config.json: {Name:mkc958872be3c59db2f099269baae43811bcaed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:07.525193  500772 cache.go:107] acquiring lock: {Name:mk7d95993b5087d5334ae23cc35b07dd938b4c75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525248  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 10:02:07.525263  500772 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.144µs
	I1227 10:02:07.525272  500772 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 10:02:07.525287  500772 cache.go:107] acquiring lock: {Name:mk6192369ad8584a99a6720429a8e6ed9f2d2233 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525330  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 10:02:07.525341  500772 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 54.368µs
	I1227 10:02:07.525347  500772 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 10:02:07.525358  500772 cache.go:107] acquiring lock: {Name:mkf532e70fa97678d09d9e1a398534a24cbf9538 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525394  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 10:02:07.525404  500772 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 47.353µs
	I1227 10:02:07.525411  500772 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 10:02:07.525420  500772 cache.go:107] acquiring lock: {Name:mk0c3ba49bab6e0c44483449eacbd8852cc4fa46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525451  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 10:02:07.525460  500772 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 41.206µs
	I1227 10:02:07.525467  500772 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 10:02:07.525475  500772 cache.go:107] acquiring lock: {Name:mkab5988ea0c107a79947dffe93ac31b732eff3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525512  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 10:02:07.525522  500772 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 47.229µs
	I1227 10:02:07.525531  500772 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 10:02:07.525540  500772 cache.go:107] acquiring lock: {Name:mkb370ce4e4194287b205d66c7b65e6a2ed45413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525575  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 10:02:07.525585  500772 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 45.334µs
	I1227 10:02:07.525590  500772 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 10:02:07.525599  500772 cache.go:107] acquiring lock: {Name:mkafd8402b85c8e9941d589b0a0272c8df27837d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525625  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 10:02:07.525634  500772 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 35.537µs
	I1227 10:02:07.525640  500772 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 10:02:07.525648  500772 cache.go:107] acquiring lock: {Name:mkffbc7f5ad1358fd7e7925aa1649b58cadec1a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.525679  500772 cache.go:115] /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 10:02:07.525687  500772 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.746µs
	I1227 10:02:07.525693  500772 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 10:02:07.525699  500772 cache.go:87] Successfully saved all images to host disk.
	I1227 10:02:07.544117  500772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:02:07.544141  500772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:02:07.544156  500772 cache.go:243] Successfully downloaded all kic artifacts
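All eight images were found in the local file cache rather than downloaded. A quick way to see what that cache holds on the build host (a manual sketch; the paths mirror the ones in the log):

    # Sketch: list the cached arm64 image tarballs referenced above.
    ls /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/
    ls /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/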
	I1227 10:02:07.544188  500772 start.go:360] acquireMachinesLock for no-preload-021144: {Name:mk023bae09bbe814fea61a003c760e0dae17d436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:02:07.544299  500772 start.go:364] duration metric: took 90.356µs to acquireMachinesLock for "no-preload-021144"
	I1227 10:02:07.544329  500772 start.go:93] Provisioning new machine with config: &{Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:02:07.544399  500772 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:02:07.549783  500772 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:02:07.550059  500772 start.go:159] libmachine.API.Create for "no-preload-021144" (driver="docker")
	I1227 10:02:07.550115  500772 client.go:173] LocalClient.Create starting
	I1227 10:02:07.550211  500772 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:02:07.550266  500772 main.go:144] libmachine: Decoding PEM data...
	I1227 10:02:07.550288  500772 main.go:144] libmachine: Parsing certificate...
	I1227 10:02:07.550343  500772 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:02:07.550366  500772 main.go:144] libmachine: Decoding PEM data...
	I1227 10:02:07.550378  500772 main.go:144] libmachine: Parsing certificate...
	I1227 10:02:07.550776  500772 cli_runner.go:164] Run: docker network inspect no-preload-021144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:02:07.567022  500772 cli_runner.go:211] docker network inspect no-preload-021144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:02:07.567108  500772 network_create.go:284] running [docker network inspect no-preload-021144] to gather additional debugging logs...
	I1227 10:02:07.567133  500772 cli_runner.go:164] Run: docker network inspect no-preload-021144
	W1227 10:02:07.583603  500772 cli_runner.go:211] docker network inspect no-preload-021144 returned with exit code 1
	I1227 10:02:07.583635  500772 network_create.go:287] error running [docker network inspect no-preload-021144]: docker network inspect no-preload-021144: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-021144 not found
	I1227 10:02:07.583648  500772 network_create.go:289] output of [docker network inspect no-preload-021144]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-021144 not found
	
	** /stderr **
	I1227 10:02:07.583771  500772 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:02:07.600797  500772 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:02:07.601203  500772 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:02:07.601448  500772 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:02:07.601716  500772 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-489f01168e32 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:17:c3:81:51:6c} reservation:<nil>}
	I1227 10:02:07.602255  500772 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a8a790}
	I1227 10:02:07.602278  500772 network_create.go:124] attempt to create docker network no-preload-021144 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:02:07.602336  500772 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-021144 no-preload-021144
	I1227 10:02:07.662723  500772 network_create.go:108] docker network no-preload-021144 192.168.85.0/24 created
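Once the bridge network exists it can be inspected directly on the same Docker host; a minimal sketch (not part of the captured log):

    # Sketch: confirm the subnet/gateway chosen for the profile network.
    docker network inspect no-preload-021144 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # Expected from the log above: 192.168.85.0/24 via 192.168.85.1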
	I1227 10:02:07.662762  500772 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-021144" container
	I1227 10:02:07.662837  500772 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:02:07.678805  500772 cli_runner.go:164] Run: docker volume create no-preload-021144 --label name.minikube.sigs.k8s.io=no-preload-021144 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:02:07.696093  500772 oci.go:103] Successfully created a docker volume no-preload-021144
	I1227 10:02:07.696197  500772 cli_runner.go:164] Run: docker run --rm --name no-preload-021144-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-021144 --entrypoint /usr/bin/test -v no-preload-021144:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:02:08.249082  500772 oci.go:107] Successfully prepared a docker volume no-preload-021144
	I1227 10:02:08.249150  500772 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1227 10:02:08.249282  500772 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:02:08.249397  500772 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:02:08.303970  500772 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-021144 --name no-preload-021144 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-021144 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-021144 --network no-preload-021144 --ip 192.168.85.2 --volume no-preload-021144:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:02:08.612115  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Running}}
	I1227 10:02:08.632945  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:02:08.659264  500772 cli_runner.go:164] Run: docker exec no-preload-021144 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:02:08.733044  500772 oci.go:144] the created container "no-preload-021144" has a running status.
	I1227 10:02:08.733078  500772 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa...
	I1227 10:02:08.909060  500772 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:02:08.952054  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:02:08.981892  500772 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:02:08.981912  500772 kic_runner.go:114] Args: [docker exec --privileged no-preload-021144 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:02:09.045932  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
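With the public key installed in /home/docker/.ssh/authorized_keys, the node container is reachable over the published SSH port. A manual login sketch (key path and port 33431 are taken from the log; not part of the captured output):

    # Sketch: SSH into the kic node container the way the provisioner does.
    ssh -i /home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa \
        -p 33431 docker@127.0.0.1 hostname
    # Expected output: no-preload-021144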
	I1227 10:02:09.072328  500772 machine.go:94] provisionDockerMachine start ...
	I1227 10:02:09.072417  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:09.092006  500772 main.go:144] libmachine: Using SSH client type: native
	I1227 10:02:09.092343  500772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1227 10:02:09.092355  500772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:02:09.093070  500772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:02:12.233884  500772 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-021144
	
	I1227 10:02:12.233911  500772 ubuntu.go:182] provisioning hostname "no-preload-021144"
	I1227 10:02:12.233977  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:12.252003  500772 main.go:144] libmachine: Using SSH client type: native
	I1227 10:02:12.252326  500772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1227 10:02:12.252344  500772 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-021144 && echo "no-preload-021144" | sudo tee /etc/hostname
	I1227 10:02:12.400166  500772 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-021144
	
	I1227 10:02:12.400260  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:12.418275  500772 main.go:144] libmachine: Using SSH client type: native
	I1227 10:02:12.418596  500772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1227 10:02:12.418619  500772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-021144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-021144/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-021144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:02:12.558677  500772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:02:12.558705  500772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:02:12.558739  500772 ubuntu.go:190] setting up certificates
	I1227 10:02:12.558749  500772 provision.go:84] configureAuth start
	I1227 10:02:12.558818  500772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-021144
	I1227 10:02:12.576842  500772 provision.go:143] copyHostCerts
	I1227 10:02:12.576915  500772 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:02:12.576930  500772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:02:12.577014  500772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:02:12.577121  500772 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:02:12.577132  500772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:02:12.577159  500772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:02:12.577223  500772 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:02:12.577232  500772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:02:12.577256  500772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:02:12.577306  500772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.no-preload-021144 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-021144]
	I1227 10:02:12.729740  500772 provision.go:177] copyRemoteCerts
	I1227 10:02:12.729828  500772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:02:12.729870  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:12.747945  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:12.845937  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:02:12.864416  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:02:12.884454  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:02:12.902599  500772 provision.go:87] duration metric: took 343.818246ms to configureAuth
	I1227 10:02:12.902670  500772 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:02:12.902888  500772 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:02:12.903011  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:12.920157  500772 main.go:144] libmachine: Using SSH client type: native
	I1227 10:02:12.920470  500772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1227 10:02:12.920490  500772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:02:13.230024  500772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:02:13.230044  500772 machine.go:97] duration metric: took 4.157695377s to provisionDockerMachine
	I1227 10:02:13.230055  500772 client.go:176] duration metric: took 5.679929147s to LocalClient.Create
	I1227 10:02:13.230068  500772 start.go:167] duration metric: took 5.680012955s to libmachine.API.Create "no-preload-021144"
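The container-runtime options step above writes a one-line sysconfig file inside the node before restarting CRI-O. A sketch of checking it by hand (file name and contents come from the SSH command in the log):

    # Sketch: verify the CRI-O minikube options written during provisioning.
    docker exec no-preload-021144 cat /etc/sysconfig/crio.minikube
    # Expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '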
	I1227 10:02:13.230075  500772 start.go:293] postStartSetup for "no-preload-021144" (driver="docker")
	I1227 10:02:13.230085  500772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:02:13.230218  500772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:02:13.230278  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:13.249035  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:13.350662  500772 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:02:13.354449  500772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:02:13.354476  500772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:02:13.354488  500772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:02:13.354547  500772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:02:13.354632  500772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:02:13.354745  500772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:02:13.362657  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:02:13.381479  500772 start.go:296] duration metric: took 151.389097ms for postStartSetup
	I1227 10:02:13.381861  500772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-021144
	I1227 10:02:13.401159  500772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/config.json ...
	I1227 10:02:13.401445  500772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:02:13.401492  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:13.420250  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:13.519584  500772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:02:13.524544  500772 start.go:128] duration metric: took 5.980129389s to createHost
	I1227 10:02:13.524570  500772 start.go:83] releasing machines lock for "no-preload-021144", held for 5.980258038s
	I1227 10:02:13.524645  500772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-021144
	I1227 10:02:13.541762  500772 ssh_runner.go:195] Run: cat /version.json
	I1227 10:02:13.541792  500772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:02:13.541835  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:13.541856  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:13.566958  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:13.579773  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:13.766032  500772 ssh_runner.go:195] Run: systemctl --version
	I1227 10:02:13.772848  500772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:02:13.807301  500772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:02:13.811851  500772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:02:13.811932  500772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:02:13.842999  500772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:02:13.843072  500772 start.go:496] detecting cgroup driver to use...
	I1227 10:02:13.843122  500772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:02:13.843228  500772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:02:13.862139  500772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:02:13.875784  500772 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:02:13.875845  500772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:02:13.894458  500772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:02:13.913521  500772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:02:14.045920  500772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:02:14.185563  500772 docker.go:234] disabling docker service ...
	I1227 10:02:14.185640  500772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:02:14.209954  500772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:02:14.225694  500772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:02:14.352439  500772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:02:14.469729  500772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
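Because the kic base image ships more than one runtime, cri-docker and Docker are stopped and masked before CRI-O is configured. A quick status sketch run against the node (not from the captured log):

    # Sketch: confirm the competing runtimes were masked and are inactive.
    docker exec no-preload-021144 systemctl is-enabled cri-docker.socket docker.socket docker.service
    docker exec no-preload-021144 systemctl is-active docker crio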
	I1227 10:02:14.483310  500772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:02:14.499214  500772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:02:14.499340  500772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.508662  500772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:02:14.508780  500772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.518382  500772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.527485  500772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.536667  500772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:02:14.545418  500772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.555135  500772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.569359  500772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:02:14.578789  500772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:02:14.586804  500772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:02:14.594626  500772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:02:14.715397  500772 ssh_runner.go:195] Run: sudo systemctl restart crio
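The sed edits above adjust CRI-O's drop-in config before the restart. A grep sketch showing the keys those edits touch (expected values are inferred from the commands in the log):

    # Sketch: inspect the CRI-O drop-in after the sed-based edits and restart.
    docker exec no-preload-021144 \
      grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (per the commands above):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",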
	I1227 10:02:14.880275  500772 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:02:14.880400  500772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:02:14.884350  500772 start.go:574] Will wait 60s for crictl version
	I1227 10:02:14.884426  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:14.888059  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:02:14.921896  500772 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:02:14.921990  500772 ssh_runner.go:195] Run: crio --version
	I1227 10:02:14.957597  500772 ssh_runner.go:195] Run: crio --version
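crictl was pointed at the CRI-O socket via /etc/crictl.yaml a few lines earlier, so the same version check can be reproduced by hand (a sketch, not from the captured log):

    # Sketch: query the runtime over the socket configured in /etc/crictl.yaml.
    docker exec no-preload-021144 sudo /usr/local/bin/crictl version
    # Expected: RuntimeName cri-o, RuntimeVersion 1.34.3, RuntimeApiVersion v1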
	I1227 10:02:14.992676  500772 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:02:14.995486  500772 cli_runner.go:164] Run: docker network inspect no-preload-021144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:02:15.032629  500772 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:02:15.038215  500772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:02:15.050300  500772 kubeadm.go:884] updating cluster {Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:02:15.050428  500772 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:02:15.050474  500772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:02:15.077495  500772 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1227 10:02:15.077521  500772 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1227 10:02:15.077579  500772 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:15.077815  500772 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.077928  500772 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.078030  500772 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.078206  500772 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.078314  500772 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1227 10:02:15.078397  500772 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.078484  500772 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:02:15.079890  500772 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.080355  500772 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1227 10:02:15.080522  500772 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:15.080757  500772 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.080908  500772 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.081130  500772 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.081335  500772 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.082573  500772 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
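None of the eight images are present in the host Docker daemon, so the loader falls back to checking the node's own image store. A sketch for listing what CRI-O already has at this point (not part of the captured log):

    # Sketch: list image tags currently known to CRI-O inside the node.
    docker exec no-preload-021144 sudo /usr/local/bin/crictl images
    # At this point in the log the cached Kubernetes images are not present yet,
    # which is why the transfer/load steps below run.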
	I1227 10:02:15.392236  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.394473  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1227 10:02:15.398551  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.409302  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:02:15.411311  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.416097  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.417981  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.486681  500772 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I1227 10:02:15.486785  500772 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.486865  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.489337  500772 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1227 10:02:15.489434  500772 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1227 10:02:15.489513  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.534715  500772 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1227 10:02:15.534888  500772 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.534967  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.534837  500772 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1227 10:02:15.535087  500772 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:02:15.535133  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.566277  500772 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I1227 10:02:15.566324  500772 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.566388  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.566464  500772 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I1227 10:02:15.566485  500772 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.566523  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.579715  500772 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I1227 10:02:15.579789  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 10:02:15.579832  500772 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.579874  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.579922  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:15.579960  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.580010  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.579757  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.579933  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:02:15.675063  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 10:02:15.703263  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.703364  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.703431  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.703498  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.703567  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:02:15.703653  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.835214  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 10:02:15.865800  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 10:02:15.865880  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:02:15.865946  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.866023  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:02:15.866076  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:02:15.883455  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:02:15.915367  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1227 10:02:15.915472  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1227 10:02:15.976733  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:02:15.976813  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1227 10:02:15.976884  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1227 10:02:15.976940  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I1227 10:02:15.976991  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 10:02:15.977039  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I1227 10:02:15.977084  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 10:02:15.977134  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I1227 10:02:15.977179  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 10:02:15.977228  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1227 10:02:15.977276  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1227 10:02:15.977325  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1227 10:02:15.977339  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1227 10:02:16.020450  500772 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1227 10:02:16.020524  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1227 10:02:16.026902  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1227 10:02:16.027012  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 10:02:16.027075  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1227 10:02:16.027094  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1227 10:02:16.027138  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1227 10:02:16.027151  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I1227 10:02:16.027193  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1227 10:02:16.027219  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I1227 10:02:16.027261  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1227 10:02:16.027276  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I1227 10:02:16.027322  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1227 10:02:16.027336  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	W1227 10:02:16.305837  500772 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1227 10:02:16.306027  500772 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:16.366067  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1227 10:02:16.366129  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I1227 10:02:16.366243  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
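Each missing image is copied into /var/lib/minikube/images and loaded with podman, whose image store is visible to CRI-O on the node. A manual sketch of the same load-and-verify step for the pause image (paths come from the log; the verification command is an assumption, not captured output):

    # Sketch: reload and verify one of the transferred tarballs by hand.
    docker exec no-preload-021144 sudo podman load -i /var/lib/minikube/images/pause_3.10.1
    docker exec no-preload-021144 sudo podman images --format '{{.Repository}}:{{.Tag}}' | grep pause
    # Expected: registry.k8s.io/pause:3.10.1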
	I1227 10:02:16.461792  500772 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1227 10:02:16.462488  500772 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:16.462685  500772 ssh_runner.go:195] Run: which crictl
	I1227 10:02:16.561859  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:16.624852  500772 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 10:02:16.624922  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 10:02:16.700704  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:18.253575  500772 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.628628458s)
	I1227 10:02:18.253609  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1227 10:02:18.253629  500772 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1227 10:02:18.253655  500772 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.552862418s)
	I1227 10:02:18.253679  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1227 10:02:18.253756  500772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:19.493046  500772 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.239250833s)
	I1227 10:02:19.493094  500772 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1227 10:02:19.493144  500772 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.239441391s)
	I1227 10:02:19.493162  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1227 10:02:19.493179  500772 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1227 10:02:19.493194  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1227 10:02:19.493221  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1227 10:02:19.499359  500772 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1227 10:02:19.499399  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1227 10:02:21.326412  500772 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.833168692s)
	I1227 10:02:21.326441  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1227 10:02:21.326459  500772 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 10:02:21.326507  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 10:02:22.626605  500772 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.300070832s)
	I1227 10:02:22.626632  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1227 10:02:22.626650  500772 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 10:02:22.626698  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 10:02:23.952141  500772 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.325412556s)
	I1227 10:02:23.952172  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1227 10:02:23.952191  500772 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 10:02:23.952244  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 10:02:25.109155  500772 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.156882609s)
	I1227 10:02:25.109182  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1227 10:02:25.109201  500772 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1227 10:02:25.109252  500772 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1227 10:02:25.684093  500772 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22344-301174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1227 10:02:25.684136  500772 cache_images.go:125] Successfully loaded all cached images
	I1227 10:02:25.684143  500772 cache_images.go:94] duration metric: took 10.606598787s to LoadCachedImages
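With the tarballs on the node, each image is loaded into the CRI-O image store one at a time via sudo podman load -i, and the amd64 storage-provisioner flagged earlier for an arch mismatch is removed with crictl rmi before its arm64 replacement is loaded. A rough sketch of that sequential load loop, assuming an illustrative runOnNode helper and that the tarballs were already transferred:

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode is an illustrative stand-in for running a command on the node over ssh.
func runOnNode(host, cmdline string) error {
	out, err := exec.Command("ssh", host, cmdline).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", cmdline, err, out)
	}
	return nil
}

func main() {
	host := "docker@127.0.0.1"
	// Tarballs copied in the steps above, loaded one after another.
	tars := []string{
		"/var/lib/minikube/images/kube-scheduler_v1.35.0",
		"/var/lib/minikube/images/coredns_v1.13.1",
		"/var/lib/minikube/images/etcd_3.6.6-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, t := range tars {
		if err := runOnNode(host, "sudo podman load -i "+t); err != nil {
			fmt.Println(err)
			return
		}
	}
}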
	I1227 10:02:25.684156  500772 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:02:25.684255  500772 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-021144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:02:25.684350  500772 ssh_runner.go:195] Run: crio config
	I1227 10:02:25.745382  500772 cni.go:84] Creating CNI manager for ""
	I1227 10:02:25.745408  500772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:02:25.745430  500772 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:02:25.745464  500772 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-021144 NodeName:no-preload-021144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:02:25.745613  500772 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-021144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:02:25.745713  500772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:02:25.755042  500772 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 10:02:25.755161  500772 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 10:02:25.765008  500772 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I1227 10:02:25.765162  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 10:02:25.765281  500772 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I1227 10:02:25.765323  500772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:02:25.765425  500772 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I1227 10:02:25.765485  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 10:02:25.775380  500772 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 10:02:25.775414  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I1227 10:02:25.775665  500772 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 10:02:25.775701  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I1227 10:02:25.792957  500772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 10:02:25.844661  500772 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 10:02:25.844709  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
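The kubeadm, kubectl and kubelet binaries are fetched with the checksum=file: convention shown above: the download URL is paired with the published .sha256 next to it, and the digest is verified before the binary is pushed to the node. A hedged sketch of that verify-before-install step (error handling trimmed for brevity):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Println(err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Println(err)
		return
	}
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	want := strings.Fields(string(sum))[0] // published digest is the first field of the .sha256 file
	fmt.Println("checksum ok:", got == want)
}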
	I1227 10:02:26.632120  500772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:02:26.643175  500772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:02:26.665531  500772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:02:26.686095  500772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
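The file written here is the four-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) staged as kubeadm.yaml.new before being promoted to kubeadm.yaml. A small stdlib-only sketch of a sanity check one could run over it; the path and the checks are illustrative, not something minikube itself performs:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Illustrative check over the staged config; the real run only writes and copies it.
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Every "---"-separated document should declare apiVersion and kind.
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
			fmt.Printf("document %d is missing apiVersion/kind\n", i)
		}
	}
	// The kindnet pod CIDR should appear both as podSubnet and clusterCIDR.
	if !strings.Contains(string(raw), `podSubnet: "10.244.0.0/16"`) ||
		!strings.Contains(string(raw), `clusterCIDR: "10.244.0.0/16"`) {
		fmt.Println("pod CIDR is not consistent with the kindnet default")
	}
}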
	I1227 10:02:26.700348  500772 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:02:26.706331  500772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:02:26.717711  500772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:02:26.835861  500772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:02:26.863004  500772 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144 for IP: 192.168.85.2
	I1227 10:02:26.863027  500772 certs.go:195] generating shared ca certs ...
	I1227 10:02:26.863044  500772 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:26.863235  500772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:02:26.863287  500772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:02:26.863306  500772 certs.go:257] generating profile certs ...
	I1227 10:02:26.863368  500772 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.key
	I1227 10:02:26.863391  500772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt with IP's: []
	I1227 10:02:27.016427  500772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt ...
	I1227 10:02:27.016461  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: {Name:mk0fad12c22a50158a8658ed7238db83574fa479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:27.016670  500772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.key ...
	I1227 10:02:27.016685  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.key: {Name:mka7f2cd0fea7d2d3e4d2b58e5584867962b8080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:27.016780  500772 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key.d17a6b29
	I1227 10:02:27.016799  500772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt.d17a6b29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:02:27.835920  500772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt.d17a6b29 ...
	I1227 10:02:27.835957  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt.d17a6b29: {Name:mk12ff2e988c711ac6507c68b1594311f0a52282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:27.836210  500772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key.d17a6b29 ...
	I1227 10:02:27.836230  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key.d17a6b29: {Name:mk154e23c2077983b902f55b569160bafe491d74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:27.836357  500772 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt.d17a6b29 -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt
	I1227 10:02:27.836460  500772 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key.d17a6b29 -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key
	I1227 10:02:27.836551  500772 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.key
	I1227 10:02:27.836573  500772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.crt with IP's: []
	I1227 10:02:27.992990  500772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.crt ...
	I1227 10:02:27.993024  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.crt: {Name:mka566c68ac07edf884fe2959772f45ea05260aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:27.993273  500772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.key ...
	I1227 10:02:27.993291  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.key: {Name:mk4ab138d8e9090a1d9407ef1306e51be64d08b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
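The profile certs generated here are ordinary x509 leaves signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs listed at 10:02:27. A self-contained sketch of issuing such a cert with Go's crypto/x509; the CA here is generated inline purely for illustration, whereas the real run reuses the existing ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustration only: a throwaway CA; the real run signs with the existing minikubeCA key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs the apiserver cert is generated with above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}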
	I1227 10:02:27.993506  500772 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:02:27.993553  500772 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:02:27.993565  500772 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:02:27.993593  500772 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:02:27.993621  500772 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:02:27.993649  500772 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:02:27.993711  500772 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:02:27.994395  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:02:28.018009  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:02:28.039855  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:02:28.061561  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:02:28.081267  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:02:28.100900  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:02:28.120586  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:02:28.139803  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1227 10:02:28.158951  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:02:28.177946  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:02:28.197402  500772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:02:28.216561  500772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:02:28.231343  500772 ssh_runner.go:195] Run: openssl version
	I1227 10:02:28.239317  500772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:02:28.248311  500772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:02:28.257014  500772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:02:28.261968  500772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:02:28.262042  500772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:02:28.306434  500772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:02:28.315387  500772 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:02:28.323831  500772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:02:28.332643  500772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:02:28.341557  500772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:02:28.346509  500772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:02:28.346621  500772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:02:28.402197  500772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:02:28.422483  500772 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 10:02:28.442262  500772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:02:28.459624  500772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:02:28.470857  500772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:02:28.476085  500772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:02:28.476153  500772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:02:28.519238  500772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:02:28.528068  500772 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
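Each certificate dropped under /usr/share/ca-certificates is made discoverable by OpenSSL by hashing its subject (openssl x509 -hash -noout) and symlinking /etc/ssl/certs/<hash>.0 at it, which is what the ln -fs commands above do. A local-only sketch of the same two steps; the real run executes them on the node over ssh with sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Illustration: runs locally; the log performs the equivalent commands on the node.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // mirror ln -fs: replace any stale link first
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Println(err)
	}
}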
	I1227 10:02:28.536825  500772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:02:28.541827  500772 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:02:28.541922  500772 kubeadm.go:401] StartCluster: {Name:no-preload-021144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-021144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:02:28.542008  500772 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:02:28.542073  500772 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:02:28.570966  500772 cri.go:96] found id: ""
	I1227 10:02:28.571037  500772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:02:28.580500  500772 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:02:28.589580  500772 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:02:28.589646  500772 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:02:28.600707  500772 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:02:28.600731  500772 kubeadm.go:158] found existing configuration files:
	
	I1227 10:02:28.600787  500772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:02:28.609876  500772 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:02:28.609971  500772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:02:28.619612  500772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:02:28.628876  500772 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:02:28.629005  500772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:02:28.637797  500772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:02:28.646576  500772 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:02:28.646641  500772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:02:28.655081  500772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:02:28.664056  500772 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:02:28.664122  500772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:02:28.673053  500772 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:02:28.715033  500772 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:02:28.715275  500772 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:02:28.789848  500772 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:02:28.789988  500772 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:02:28.790045  500772 kubeadm.go:319] OS: Linux
	I1227 10:02:28.790176  500772 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:02:28.790286  500772 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:02:28.790368  500772 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:02:28.790450  500772 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:02:28.790536  500772 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:02:28.790644  500772 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:02:28.790731  500772 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:02:28.790823  500772 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:02:28.790913  500772 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:02:28.862071  500772 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:02:28.862305  500772 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:02:28.862447  500772 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:02:28.876861  500772 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:02:28.885207  500772 out.go:252]   - Generating certificates and keys ...
	I1227 10:02:28.885385  500772 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:02:28.885498  500772 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:02:28.965191  500772 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:02:29.295350  500772 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:02:29.543866  500772 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:02:29.713946  500772 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:02:30.037369  500772 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:02:30.037692  500772 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-021144] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:02:30.391061  500772 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:02:30.391374  500772 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-021144] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:02:30.752288  500772 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:02:31.025675  500772 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:02:31.447324  500772 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:02:31.447830  500772 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:02:31.811729  500772 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:02:31.938263  500772 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:02:32.462120  500772 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:02:32.566270  500772 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:02:33.349674  500772 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:02:33.350339  500772 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:02:33.353002  500772 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:02:33.358052  500772 out.go:252]   - Booting up control plane ...
	I1227 10:02:33.358187  500772 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:02:33.358266  500772 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:02:33.358344  500772 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:02:33.376703  500772 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:02:33.376849  500772 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:02:33.384691  500772 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:02:33.385056  500772 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:02:33.385125  500772 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:02:33.528392  500772 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:02:33.528554  500772 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:02:35.029922  500772 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501777471s
	I1227 10:02:35.033525  500772 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:02:35.033631  500772 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1227 10:02:35.033725  500772 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:02:35.033914  500772 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:02:36.042817  500772 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.008597771s
	I1227 10:02:38.132647  500772 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.098863716s
	I1227 10:02:40.045993  500772 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002434686s
	I1227 10:02:40.098206  500772 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:02:40.120369  500772 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:02:40.147466  500772 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:02:40.147711  500772 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-021144 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:02:40.168601  500772 kubeadm.go:319] [bootstrap-token] Using token: yn8xbt.nud2sclojoksgj7u
	I1227 10:02:40.171808  500772 out.go:252]   - Configuring RBAC rules ...
	I1227 10:02:40.171958  500772 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:02:40.180864  500772 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:02:40.194347  500772 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:02:40.202826  500772 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:02:40.207922  500772 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:02:40.222644  500772 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:02:40.443330  500772 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:02:40.876853  500772 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:02:41.444201  500772 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:02:41.445452  500772 kubeadm.go:319] 
	I1227 10:02:41.445552  500772 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:02:41.445580  500772 kubeadm.go:319] 
	I1227 10:02:41.445664  500772 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:02:41.445669  500772 kubeadm.go:319] 
	I1227 10:02:41.445694  500772 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:02:41.445754  500772 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:02:41.445804  500772 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:02:41.445808  500772 kubeadm.go:319] 
	I1227 10:02:41.445862  500772 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:02:41.445867  500772 kubeadm.go:319] 
	I1227 10:02:41.445914  500772 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:02:41.445918  500772 kubeadm.go:319] 
	I1227 10:02:41.445970  500772 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:02:41.446046  500772 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:02:41.446123  500772 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:02:41.446129  500772 kubeadm.go:319] 
	I1227 10:02:41.446239  500772 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:02:41.446320  500772 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:02:41.446324  500772 kubeadm.go:319] 
	I1227 10:02:41.446415  500772 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yn8xbt.nud2sclojoksgj7u \
	I1227 10:02:41.446518  500772 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c \
	I1227 10:02:41.446538  500772 kubeadm.go:319] 	--control-plane 
	I1227 10:02:41.446542  500772 kubeadm.go:319] 
	I1227 10:02:41.446628  500772 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:02:41.446633  500772 kubeadm.go:319] 
	I1227 10:02:41.446714  500772 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yn8xbt.nud2sclojoksgj7u \
	I1227 10:02:41.446816  500772 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c 
	I1227 10:02:41.449486  500772 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:02:41.449908  500772 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:02:41.450015  500772 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:02:41.450030  500772 cni.go:84] Creating CNI manager for ""
	I1227 10:02:41.450038  500772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:02:41.453202  500772 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:02:41.456164  500772 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:02:41.462527  500772 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:02:41.462546  500772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:02:41.479843  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:02:41.787480  500772 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:02:41.787545  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:41.787625  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-021144 minikube.k8s.io/updated_at=2025_12_27T10_02_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=no-preload-021144 minikube.k8s.io/primary=true
	I1227 10:02:41.985101  500772 ops.go:34] apiserver oom_adj: -16
	I1227 10:02:41.985214  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:42.486132  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:42.986057  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:43.485509  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:43.985952  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:44.486310  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:44.985328  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:45.485841  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:45.985800  500772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:02:46.137951  500772 kubeadm.go:1114] duration metric: took 4.350463096s to wait for elevateKubeSystemPrivileges
	I1227 10:02:46.137995  500772 kubeadm.go:403] duration metric: took 17.596076462s to StartCluster
	I1227 10:02:46.138014  500772 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:46.138080  500772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:02:46.138733  500772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:02:46.138973  500772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:02:46.138995  500772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:02:46.139061  500772 addons.go:70] Setting storage-provisioner=true in profile "no-preload-021144"
	I1227 10:02:46.139075  500772 addons.go:239] Setting addon storage-provisioner=true in "no-preload-021144"
	I1227 10:02:46.139095  500772 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:02:46.138977  500772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:02:46.139581  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:02:46.140001  500772 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:02:46.140044  500772 addons.go:70] Setting default-storageclass=true in profile "no-preload-021144"
	I1227 10:02:46.140071  500772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-021144"
	I1227 10:02:46.140310  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:02:46.144646  500772 out.go:179] * Verifying Kubernetes components...
	I1227 10:02:46.154020  500772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:02:46.183214  500772 addons.go:239] Setting addon default-storageclass=true in "no-preload-021144"
	I1227 10:02:46.183254  500772 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:02:46.183679  500772 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:02:46.186969  500772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:02:46.198244  500772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:02:46.198277  500772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:02:46.198343  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:46.217087  500772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:02:46.217128  500772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:02:46.217195  500772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:02:46.231006  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:46.263764  500772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:02:46.549858  500772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:02:46.549971  500772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:02:46.635086  500772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:02:46.636923  500772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:02:47.270068  500772 node_ready.go:35] waiting up to 6m0s for node "no-preload-021144" to be "Ready" ...
	I1227 10:02:47.270414  500772 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 10:02:47.671995  500772 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:02:47.675051  500772 addons.go:530] duration metric: took 1.536046787s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:02:47.775300  500772 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-021144" context rescaled to 1 replicas
	W1227 10:02:49.273871  500772 node_ready.go:57] node "no-preload-021144" has "Ready":"False" status (will retry)
	W1227 10:02:51.777081  500772 node_ready.go:57] node "no-preload-021144" has "Ready":"False" status (will retry)
	W1227 10:02:54.273594  500772 node_ready.go:57] node "no-preload-021144" has "Ready":"False" status (will retry)
	W1227 10:02:56.772811  500772 node_ready.go:57] node "no-preload-021144" has "Ready":"False" status (will retry)
	W1227 10:02:58.772858  500772 node_ready.go:57] node "no-preload-021144" has "Ready":"False" status (will retry)
	I1227 10:03:00.300223  500772 node_ready.go:49] node "no-preload-021144" is "Ready"
	I1227 10:03:00.300257  500772 node_ready.go:38] duration metric: took 13.030141645s for node "no-preload-021144" to be "Ready" ...
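The node readiness wait above, and the apiserver and kube-system pod waits that follow, all share the same shape: poll a condition on a short interval until it succeeds or the 6m0s budget expires. A generic stdlib sketch of that loop, with a stubbed condition standing in for the real Kubernetes API calls:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check on the given interval until it reports ready or the
// timeout elapses, the same shape as the node/apiserver/pod waits in this log.
func waitFor(interval, timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Stubbed condition; the real checks query the API server for node and pod status.
	err := waitFor(2*time.Second, 6*time.Minute, func() (bool, error) {
		return time.Since(start) > 10*time.Second, nil
	})
	fmt.Println(err)
}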
	I1227 10:03:00.300273  500772 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:03:00.300346  500772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:03:00.346917  500772 api_server.go:72] duration metric: took 14.20776527s to wait for apiserver process to appear ...
	I1227 10:03:00.346945  500772 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:03:00.346968  500772 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:03:00.358093  500772 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:03:00.359623  500772 api_server.go:141] control plane version: v1.35.0
	I1227 10:03:00.359656  500772 api_server.go:131] duration metric: took 12.70374ms to wait for apiserver health ...
	I1227 10:03:00.359666  500772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:03:00.372356  500772 system_pods.go:59] 8 kube-system pods found
	I1227 10:03:00.372402  500772 system_pods.go:61] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:03:00.372410  500772 system_pods.go:61] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running
	I1227 10:03:00.372417  500772 system_pods.go:61] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running
	I1227 10:03:00.372422  500772 system_pods.go:61] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running
	I1227 10:03:00.372427  500772 system_pods.go:61] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running
	I1227 10:03:00.372432  500772 system_pods.go:61] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running
	I1227 10:03:00.372436  500772 system_pods.go:61] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running
	I1227 10:03:00.372443  500772 system_pods.go:61] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:03:00.372450  500772 system_pods.go:74] duration metric: took 12.777972ms to wait for pod list to return data ...
	I1227 10:03:00.372459  500772 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:03:00.386648  500772 default_sa.go:45] found service account: "default"
	I1227 10:03:00.386675  500772 default_sa.go:55] duration metric: took 14.20949ms for default service account to be created ...
	I1227 10:03:00.386689  500772 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:03:00.390814  500772 system_pods.go:86] 8 kube-system pods found
	I1227 10:03:00.390849  500772 system_pods.go:89] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:03:00.390858  500772 system_pods.go:89] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running
	I1227 10:03:00.390866  500772 system_pods.go:89] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running
	I1227 10:03:00.390871  500772 system_pods.go:89] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running
	I1227 10:03:00.390877  500772 system_pods.go:89] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running
	I1227 10:03:00.390882  500772 system_pods.go:89] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running
	I1227 10:03:00.390887  500772 system_pods.go:89] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running
	I1227 10:03:00.390894  500772 system_pods.go:89] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:03:00.390925  500772 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 10:03:00.588239  500772 system_pods.go:86] 8 kube-system pods found
	I1227 10:03:00.588275  500772 system_pods.go:89] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:03:00.588282  500772 system_pods.go:89] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running
	I1227 10:03:00.588289  500772 system_pods.go:89] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running
	I1227 10:03:00.588294  500772 system_pods.go:89] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running
	I1227 10:03:00.588298  500772 system_pods.go:89] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running
	I1227 10:03:00.588344  500772 system_pods.go:89] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running
	I1227 10:03:00.588356  500772 system_pods.go:89] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running
	I1227 10:03:00.588364  500772 system_pods.go:89] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:03:00.936188  500772 system_pods.go:86] 8 kube-system pods found
	I1227 10:03:00.936228  500772 system_pods.go:89] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:03:00.936235  500772 system_pods.go:89] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running
	I1227 10:03:00.936241  500772 system_pods.go:89] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running
	I1227 10:03:00.936246  500772 system_pods.go:89] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running
	I1227 10:03:00.936251  500772 system_pods.go:89] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running
	I1227 10:03:00.936255  500772 system_pods.go:89] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running
	I1227 10:03:00.936260  500772 system_pods.go:89] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running
	I1227 10:03:00.936266  500772 system_pods.go:89] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:03:01.323036  500772 system_pods.go:86] 8 kube-system pods found
	I1227 10:03:01.323073  500772 system_pods.go:89] "coredns-7d764666f9-p7h6b" [a93fe941-a366-4e29-952d-b0141a7ddfdb] Running
	I1227 10:03:01.323081  500772 system_pods.go:89] "etcd-no-preload-021144" [0b49553e-9ec8-475e-8fc6-7c906eeaaf93] Running
	I1227 10:03:01.323087  500772 system_pods.go:89] "kindnet-hnnqk" [12c31c7c-1258-40d9-a7b8-a110007bf0d0] Running
	I1227 10:03:01.323091  500772 system_pods.go:89] "kube-apiserver-no-preload-021144" [b4f282e0-9f47-4336-ad51-0bf6f73cd7d9] Running
	I1227 10:03:01.323097  500772 system_pods.go:89] "kube-controller-manager-no-preload-021144" [ddb13d4f-7bd1-4a06-a77d-297c348063cb] Running
	I1227 10:03:01.323102  500772 system_pods.go:89] "kube-proxy-gzt2m" [f93b8a8e-6739-4118-8e21-a27511a17f92] Running
	I1227 10:03:01.323107  500772 system_pods.go:89] "kube-scheduler-no-preload-021144" [1890b5fe-4829-4d35-b5fc-f454aae53829] Running
	I1227 10:03:01.323113  500772 system_pods.go:89] "storage-provisioner" [d93d446a-5434-4190-862c-fb660b9b87df] Running
	I1227 10:03:01.323122  500772 system_pods.go:126] duration metric: took 936.425325ms to wait for k8s-apps to be running ...
	I1227 10:03:01.323136  500772 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:03:01.323200  500772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:03:01.337626  500772 system_svc.go:56] duration metric: took 14.480944ms WaitForService to wait for kubelet
	I1227 10:03:01.337657  500772 kubeadm.go:587] duration metric: took 15.198510354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:03:01.337677  500772 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:03:01.340576  500772 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:03:01.340608  500772 node_conditions.go:123] node cpu capacity is 2
	I1227 10:03:01.340624  500772 node_conditions.go:105] duration metric: took 2.941698ms to run NodePressure ...
	I1227 10:03:01.340638  500772 start.go:242] waiting for startup goroutines ...
	I1227 10:03:01.340645  500772 start.go:247] waiting for cluster config update ...
	I1227 10:03:01.340657  500772 start.go:256] writing updated cluster config ...
	I1227 10:03:01.340943  500772 ssh_runner.go:195] Run: rm -f paused
	I1227 10:03:01.345756  500772 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:03:01.349434  500772 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p7h6b" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.354672  500772 pod_ready.go:94] pod "coredns-7d764666f9-p7h6b" is "Ready"
	I1227 10:03:01.354745  500772 pod_ready.go:86] duration metric: took 5.282536ms for pod "coredns-7d764666f9-p7h6b" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.357100  500772 pod_ready.go:83] waiting for pod "etcd-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.361679  500772 pod_ready.go:94] pod "etcd-no-preload-021144" is "Ready"
	I1227 10:03:01.361749  500772 pod_ready.go:86] duration metric: took 4.622302ms for pod "etcd-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.364061  500772 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.368272  500772 pod_ready.go:94] pod "kube-apiserver-no-preload-021144" is "Ready"
	I1227 10:03:01.368301  500772 pod_ready.go:86] duration metric: took 4.211072ms for pod "kube-apiserver-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.370681  500772 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.750348  500772 pod_ready.go:94] pod "kube-controller-manager-no-preload-021144" is "Ready"
	I1227 10:03:01.750380  500772 pod_ready.go:86] duration metric: took 379.667414ms for pod "kube-controller-manager-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:01.949803  500772 pod_ready.go:83] waiting for pod "kube-proxy-gzt2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:02.349921  500772 pod_ready.go:94] pod "kube-proxy-gzt2m" is "Ready"
	I1227 10:03:02.349954  500772 pod_ready.go:86] duration metric: took 400.120716ms for pod "kube-proxy-gzt2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:02.549950  500772 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:02.950559  500772 pod_ready.go:94] pod "kube-scheduler-no-preload-021144" is "Ready"
	I1227 10:03:02.950589  500772 pod_ready.go:86] duration metric: took 400.605669ms for pod "kube-scheduler-no-preload-021144" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:03:02.950603  500772 pod_ready.go:40] duration metric: took 1.60481337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:03:03.015518  500772 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:03:03.018608  500772 out.go:203] 
	W1227 10:03:03.021725  500772 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:03:03.024884  500772 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:03:03.028000  500772 out.go:179] * Done! kubectl is now configured to use "no-preload-021144" cluster and "default" namespace by default
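The healthz wait logged above ("Checking apiserver healthz at https://192.168.85.2:8443/healthz ... returned 200: ok") boils down to polling one HTTPS endpoint until it answers 200 with "ok". A minimal Go sketch of that probe follows; the address and expected body are taken from the log, while the 5s timeout and InsecureSkipVerify are assumptions made only to keep the snippet self-contained (the real tooling authenticates against the cluster CA rather than skipping verification).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the start log above; skipping TLS verification is an
	// assumption to keep the sketch standalone, not what minikube itself does.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy control plane answers 200 with the body "ok", as in the log.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}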
	
	
	==> CRI-O <==
	Dec 27 10:03:00 no-preload-021144 crio[835]: time="2025-12-27T10:03:00.673767788Z" level=info msg="Created container 6294271fb2394fcf9c23f071f2184f5e48732eb2edf669c03b265294a1a09df5: kube-system/coredns-7d764666f9-p7h6b/coredns" id=28e220e8-1e9c-4d49-8c51-cf86116b16b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:03:00 no-preload-021144 crio[835]: time="2025-12-27T10:03:00.674742528Z" level=info msg="Starting container: 6294271fb2394fcf9c23f071f2184f5e48732eb2edf669c03b265294a1a09df5" id=45c6ff7e-c212-44ca-9e0e-da3d949b601c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:03:00 no-preload-021144 crio[835]: time="2025-12-27T10:03:00.678909317Z" level=info msg="Started container" PID=2422 containerID=6294271fb2394fcf9c23f071f2184f5e48732eb2edf669c03b265294a1a09df5 description=kube-system/coredns-7d764666f9-p7h6b/coredns id=45c6ff7e-c212-44ca-9e0e-da3d949b601c name=/runtime.v1.RuntimeService/StartContainer sandboxID=63428a3523cef7725e26ca22dcfde818007c113f850b022491168a5f90385033
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.537067058Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f96e5ef9-3dda-4d4c-b0d8-2a3715c15408 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.537149635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.542373183Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ea2774cfe04a9a1e22fbca75c5edf7a5222c7fa75755b0439effd957d8cd02a6 UID:7e543378-18ad-4c55-8879-0efffa9bdb70 NetNS:/var/run/netns/5e89519f-c66b-49fc-b25c-5ef6cf32ca65 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d630}] Aliases:map[]}"
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.542414144Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.553483443Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ea2774cfe04a9a1e22fbca75c5edf7a5222c7fa75755b0439effd957d8cd02a6 UID:7e543378-18ad-4c55-8879-0efffa9bdb70 NetNS:/var/run/netns/5e89519f-c66b-49fc-b25c-5ef6cf32ca65 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d630}] Aliases:map[]}"
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.55363574Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.55736537Z" level=info msg="Ran pod sandbox ea2774cfe04a9a1e22fbca75c5edf7a5222c7fa75755b0439effd957d8cd02a6 with infra container: default/busybox/POD" id=f96e5ef9-3dda-4d4c-b0d8-2a3715c15408 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.559139915Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1c9cd425-ac21-4dde-83ec-33fc24acce22 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.559443747Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1c9cd425-ac21-4dde-83ec-33fc24acce22 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.55956438Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1c9cd425-ac21-4dde-83ec-33fc24acce22 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.560743538Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c53a5b6a-a5d5-4e3b-88ed-ddd7daea4fbb name=/runtime.v1.ImageService/PullImage
	Dec 27 10:03:03 no-preload-021144 crio[835]: time="2025-12-27T10:03:03.562932152Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.493754519Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c53a5b6a-a5d5-4e3b-88ed-ddd7daea4fbb name=/runtime.v1.ImageService/PullImage
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.494448132Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=94a99570-1e71-42aa-82e1-3199fe0d6676 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.497702892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee6d2f94-f9a3-4a78-a4db-924f5269123c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.503727698Z" level=info msg="Creating container: default/busybox/busybox" id=b613c9f9-c319-423c-a462-43d7aa0a503f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.503833906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.513900584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.514438707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.531345855Z" level=info msg="Created container 48ad96a73a7ce1fcadd325e195fdb80b6ad205f90d22997795a4e756e2ddf3c3: default/busybox/busybox" id=b613c9f9-c319-423c-a462-43d7aa0a503f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.532307812Z" level=info msg="Starting container: 48ad96a73a7ce1fcadd325e195fdb80b6ad205f90d22997795a4e756e2ddf3c3" id=a9847cd3-60a7-442d-b356-d84ee6bea230 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:03:05 no-preload-021144 crio[835]: time="2025-12-27T10:03:05.534744446Z" level=info msg="Started container" PID=2479 containerID=48ad96a73a7ce1fcadd325e195fdb80b6ad205f90d22997795a4e756e2ddf3c3 description=default/busybox/busybox id=a9847cd3-60a7-442d-b356-d84ee6bea230 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea2774cfe04a9a1e22fbca75c5edf7a5222c7fa75755b0439effd957d8cd02a6
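The CRI-O excerpt above shows the standard image-then-container sequence for default/busybox: ImageStatus reports gcr.io/k8s-minikube/busybox:1.28.4-glibc as not found, PullImage fetches it and resolves it to a digest, then CreateContainer and StartContainer run it inside the already-created pod sandbox. A rough Go sketch of driving the same check-then-pull step from the node with crictl is below; invoking sudo crictl directly is an assumption for illustration, since the report only captures the daemon's side of those RPCs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	image := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	// Mirror the ImageStatus / PullImage steps from the CRI-O log: first ask
	// whether the image is present, then pull it.
	for _, args := range [][]string{
		{"crictl", "inspecti", image}, // image status; non-zero exit when missing
		{"crictl", "pull", image},     // pull by tag, resolved to a digest by the runtime
	} {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("$ sudo %s\n%s", strings.Join(args, " "), out)
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}
}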
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	48ad96a73a7ce       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   ea2774cfe04a9       busybox                                     default
	6294271fb2394       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   63428a3523cef       coredns-7d764666f9-p7h6b                    kube-system
	97f75d56dc619       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   4b659a6b08288       storage-provisioner                         kube-system
	7afbf7694fecb       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   acab15f160400       kindnet-hnnqk                               kube-system
	2dabcda5fdd0e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      26 seconds ago      Running             kube-proxy                0                   0e500334a2e3c       kube-proxy-gzt2m                            kube-system
	b3f8ec1ef1611       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      37 seconds ago      Running             kube-apiserver            0                   c89cde1a6375d       kube-apiserver-no-preload-021144            kube-system
	91f6d7ef57152       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      37 seconds ago      Running             etcd                      0                   912a79ea753c4       etcd-no-preload-021144                      kube-system
	9431198890ade       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      37 seconds ago      Running             kube-controller-manager   0                   3338c19d85fc6       kube-controller-manager-no-preload-021144   kube-system
	10579f9fc2b63       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      37 seconds ago      Running             kube-scheduler            0                   4bc19b3baa447       kube-scheduler-no-preload-021144            kube-system
	
	
	==> coredns [6294271fb2394fcf9c23f071f2184f5e48732eb2edf669c03b265294a1a09df5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52258 - 40698 "HINFO IN 1592522164675162567.3492152237614246012. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023682462s
	
	
	==> describe nodes <==
	Name:               no-preload-021144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-021144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=no-preload-021144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_02_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:02:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-021144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:03:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:03:11 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:03:11 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:03:11 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:03:11 +0000   Sat, 27 Dec 2025 10:02:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-021144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                cb1e511d-4a03-4ff7-9ae5-96dca8c8e0f7
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-p7h6b                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-no-preload-021144                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-hnnqk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-021144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-021144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-gzt2m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-021144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-021144 event: Registered Node no-preload-021144 in Controller
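The "Allocated resources" block above is simply the column sums of the non-terminated pods table, expressed as a share of the node's allocatable 2 CPUs and 8022304Ki of memory. A small Go sketch of that arithmetic follows, with the request values copied from the table; truncating to whole percent reproduces the 850m (42%) and 220Mi (2%) figures in the report.

package main

import "fmt"

func main() {
	// CPU requests in millicores and memory requests in Mi, copied from the
	// non-terminated pods table: busybox, coredns, etcd, kindnet, kube-apiserver,
	// kube-controller-manager, kube-proxy, kube-scheduler, storage-provisioner.
	cpuRequests := []int{0, 100, 100, 100, 250, 200, 0, 100, 0}
	memRequests := []int{0, 70, 100, 50, 0, 0, 0, 0, 0}

	totalCPU, totalMem := 0, 0
	for _, c := range cpuRequests {
		totalCPU += c
	}
	for _, m := range memRequests {
		totalMem += m
	}

	const allocatableCPUMilli = 2000 // 2 CPUs
	const allocatableMemKi = 8022304 // from the Allocatable block above
	fmt.Printf("cpu requests:    %dm (%d%%)\n", totalCPU, totalCPU*100/allocatableCPUMilli)
	fmt.Printf("memory requests: %dMi (%d%%)\n", totalMem, totalMem*1024*100/allocatableMemKi)
	// Prints 850m (42%) and 220Mi (2%), matching the Allocated resources block.
}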
	
	
	==> dmesg <==
	[Dec27 09:28] overlayfs: idmapped layers are currently not supported
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [91f6d7ef57152e0479a6aa803c0e587a1c5ed256c413540aae6805de22a55eb3] <==
	{"level":"info","ts":"2025-12-27T10:02:35.298686Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:02:36.155757Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:02:36.155806Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:02:36.155868Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T10:02:36.155969Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:02:36.156000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:02:36.156982Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:02:36.157013Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:02:36.157031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:02:36.157044Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:02:36.158343Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-021144 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:02:36.158374Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:02:36.158451Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:02:36.158562Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:02:36.159547Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:02:36.159604Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:02:36.159672Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:02:36.159731Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:02:36.159802Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:02:36.159896Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:02:36.159955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:02:36.159987Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:02:36.160648Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:02:36.161817Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:02:36.162547Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:03:12 up  2:45,  0 user,  load average: 1.69, 1.66, 1.97
	Linux no-preload-021144 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7afbf7694fecbe27110f1dccd85af07dbe4979c4f978990156e6d2caac7a8297] <==
	I1227 10:02:49.425160       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:02:49.427583       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:02:49.427732       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:02:49.427751       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:02:49.427765       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:02:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:02:49.722709       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:02:49.722743       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:02:49.722753       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:02:49.722905       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:02:49.923518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:02:49.923624       1 metrics.go:72] Registering metrics
	I1227 10:02:49.923706       1 controller.go:711] "Syncing nftables rules"
	I1227 10:02:59.723490       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:02:59.723550       1 main.go:301] handling current node
	I1227 10:03:09.724369       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:03:09.724485       1 main.go:301] handling current node
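After its caches sync, kindnet settles into a fixed cadence: roughly every ten seconds it lists the cluster's nodes, logs "Handling node with IPs" for each, and refreshes networking state for the local node. A toy Go sketch of that reconcile cadence is below, using only the standard library; the ten-second period is read off the timestamps above, and the real kindnetd loop of course programs routes and nftables rules rather than just printing.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Node set as reported in the kindnet log above (a single-node cluster).
	nodes := map[string][]string{"no-preload-021144": {"192.168.85.2"}}

	ticker := time.NewTicker(10 * time.Second) // period read off the log timestamps
	defer ticker.Stop()
	for range ticker.C {
		for name, ips := range nodes {
			fmt.Printf("Handling node %s with IPs: %v\n", name, ips)
			// The real daemon re-programs routes and nftables rules here; this
			// sketch only prints to show the cadence.
		}
	}
}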
	
	
	==> kube-apiserver [b3f8ec1ef161132afc2a5668fc6a4b9421d04a057951da2e9b0a1b2b513c6f58] <==
	E1227 10:02:38.218027       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 10:02:38.229097       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:02:38.237004       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:02:38.237549       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:02:38.244250       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:02:38.245262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:02:38.249452       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:02:38.830184       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:02:38.835332       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:02:38.835353       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:02:39.664453       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:02:39.726603       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:02:39.837978       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:02:39.849324       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 10:02:39.850496       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:02:39.855478       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:02:40.072713       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:02:40.852893       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:02:40.875833       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:02:40.890569       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:02:45.728330       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:02:45.879942       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:02:45.884856       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:02:45.973914       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 10:03:11.357818       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55050: use of closed network connection
	
	
	==> kube-controller-manager [9431198890adeb005f85e9c1b1a7a7203ea85921e525f39224dfc4355cdd13ff] <==
	I1227 10:02:44.892503       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.892554       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.892593       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.893228       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.893467       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.893525       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.893809       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.896633       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.896712       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.903861       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.907041       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.882721       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.940921       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:02:44.884100       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.944151       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.944855       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-021144" podCIDRs=["10.244.0.0/24"]
	I1227 10:02:44.956085       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.956217       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:44.974508       1 controller_utils.go:151] "Failed to update status for pod" pod="kube-system/etcd-no-preload-021144" err="Operation cannot be fulfilled on pods \"etcd-no-preload-021144\": the object has been modified; please apply your changes to the latest version and try again"
	I1227 10:02:44.974560       1 node_lifecycle_controller.go:1155] "Unable to mark pod NotReady on node" pod="kube-system/etcd-no-preload-021144" node="no-preload-021144" err="Operation cannot be fulfilled on pods \"etcd-no-preload-021144\": the object has been modified; please apply your changes to the latest version and try again"
	I1227 10:02:45.056865       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:45.081125       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:45.081241       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:02:45.081282       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:03:04.885854       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [2dabcda5fdd0e472da6b9a3f4795b0d2a64dbb76195b5cb70ef7c25764dacfa2] <==
	I1227 10:02:46.637963       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:02:46.735789       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:02:46.835879       1 shared_informer.go:377] "Caches are synced"
	I1227 10:02:46.835912       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:02:46.835993       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:02:46.909416       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:02:46.909613       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:02:46.943309       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:02:46.943616       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:02:46.943629       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:02:46.945041       1 config.go:200] "Starting service config controller"
	I1227 10:02:46.945055       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:02:46.945070       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:02:46.945075       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:02:46.945094       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:02:46.945099       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:02:46.945757       1 config.go:309] "Starting node config controller"
	I1227 10:02:46.945765       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:02:46.945771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:02:47.048943       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:02:47.048979       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:02:47.049011       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
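The paired "Waiting for caches to sync" / "Caches are synced" messages in the kube-proxy (and controller-manager) output are the usual client-go shared-informer startup handshake: start the informers, then block until every watched cache has completed its initial list. A minimal client-go sketch of that pattern follows; the kubeconfig path and the choice of a Pod informer are illustrative assumptions, not what kube-proxy itself watches.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; any reachable cluster works for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	fmt.Println("Waiting for caches to sync")
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("Caches are synced")
}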
	
	
	==> kube-scheduler [10579f9fc2b63f0173590545e7136fff851f8d2664a802c5492c09aa6b566d64] <==
	E1227 10:02:38.132766       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:02:38.132851       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:02:38.132869       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:02:38.132932       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:02:38.134679       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:02:38.134790       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:02:38.134829       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:02:38.134866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:02:38.134915       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:02:38.968525       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:02:38.971955       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:02:39.009558       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:02:39.025931       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:02:39.084721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:02:39.094742       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:02:39.198426       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:02:39.206202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:02:39.247119       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:02:39.276186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:02:39.277003       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:02:39.314995       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:02:39.337855       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:02:39.396609       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:02:39.407481       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1227 10:02:42.111051       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:02:46 no-preload-021144 kubelet[1936]: I1227 10:02:46.052231    1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f93b8a8e-6739-4118-8e21-a27511a17f92-lib-modules\") pod \"kube-proxy-gzt2m\" (UID: \"f93b8a8e-6739-4118-8e21-a27511a17f92\") " pod="kube-system/kube-proxy-gzt2m"
	Dec 27 10:02:46 no-preload-021144 kubelet[1936]: I1227 10:02:46.206410    1936 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:02:46 no-preload-021144 kubelet[1936]: W1227 10:02:46.371905    1936 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/crio-0e500334a2e3c45c768ddad9b7211280596af62c2430e3ccf3d23c25eda3cd11 WatchSource:0}: Error finding container 0e500334a2e3c45c768ddad9b7211280596af62c2430e3ccf3d23c25eda3cd11: Status 404 returned error can't find the container with id 0e500334a2e3c45c768ddad9b7211280596af62c2430e3ccf3d23c25eda3cd11
	Dec 27 10:02:46 no-preload-021144 kubelet[1936]: W1227 10:02:46.388416    1936 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/crio-acab15f160400edb3effccb4decfd7eafb207a5b21c1c3edccab1052b10cc599 WatchSource:0}: Error finding container acab15f160400edb3effccb4decfd7eafb207a5b21c1c3edccab1052b10cc599: Status 404 returned error can't find the container with id acab15f160400edb3effccb4decfd7eafb207a5b21c1c3edccab1052b10cc599
	Dec 27 10:02:47 no-preload-021144 kubelet[1936]: E1227 10:02:47.060819    1936 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-021144" containerName="kube-scheduler"
	Dec 27 10:02:47 no-preload-021144 kubelet[1936]: I1227 10:02:47.069868    1936 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gzt2m" podStartSLOduration=2.069837988 podStartE2EDuration="2.069837988s" podCreationTimestamp="2025-12-27 10:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:02:46.98940775 +0000 UTC m=+6.310487407" watchObservedRunningTime="2025-12-27 10:02:47.069837988 +0000 UTC m=+6.390917654"
	Dec 27 10:02:49 no-preload-021144 kubelet[1936]: I1227 10:02:49.976434    1936 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-hnnqk" podStartSLOduration=2.049594002 podStartE2EDuration="4.976416431s" podCreationTimestamp="2025-12-27 10:02:45 +0000 UTC" firstStartedPulling="2025-12-27 10:02:46.410793673 +0000 UTC m=+5.731873330" lastFinishedPulling="2025-12-27 10:02:49.337616102 +0000 UTC m=+8.658695759" observedRunningTime="2025-12-27 10:02:49.976301697 +0000 UTC m=+9.297381362" watchObservedRunningTime="2025-12-27 10:02:49.976416431 +0000 UTC m=+9.297496088"
	Dec 27 10:02:51 no-preload-021144 kubelet[1936]: E1227 10:02:51.743391    1936 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-021144" containerName="kube-apiserver"
	Dec 27 10:02:51 no-preload-021144 kubelet[1936]: E1227 10:02:51.911322    1936 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-021144" containerName="kube-controller-manager"
	Dec 27 10:02:51 no-preload-021144 kubelet[1936]: E1227 10:02:51.968548    1936 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-021144" containerName="kube-apiserver"
	Dec 27 10:02:53 no-preload-021144 kubelet[1936]: E1227 10:02:53.995015    1936 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-021144" containerName="etcd"
	Dec 27 10:02:57 no-preload-021144 kubelet[1936]: E1227 10:02:57.060018    1936 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-021144" containerName="kube-scheduler"
	Dec 27 10:02:59 no-preload-021144 kubelet[1936]: I1227 10:02:59.931233    1936 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: I1227 10:03:00.154141    1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcq6t\" (UniqueName: \"kubernetes.io/projected/a93fe941-a366-4e29-952d-b0141a7ddfdb-kube-api-access-jcq6t\") pod \"coredns-7d764666f9-p7h6b\" (UID: \"a93fe941-a366-4e29-952d-b0141a7ddfdb\") " pod="kube-system/coredns-7d764666f9-p7h6b"
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: I1227 10:03:00.154254    1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93fe941-a366-4e29-952d-b0141a7ddfdb-config-volume\") pod \"coredns-7d764666f9-p7h6b\" (UID: \"a93fe941-a366-4e29-952d-b0141a7ddfdb\") " pod="kube-system/coredns-7d764666f9-p7h6b"
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: I1227 10:03:00.154280    1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d93d446a-5434-4190-862c-fb660b9b87df-tmp\") pod \"storage-provisioner\" (UID: \"d93d446a-5434-4190-862c-fb660b9b87df\") " pod="kube-system/storage-provisioner"
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: I1227 10:03:00.154302    1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxgn\" (UniqueName: \"kubernetes.io/projected/d93d446a-5434-4190-862c-fb660b9b87df-kube-api-access-tzxgn\") pod \"storage-provisioner\" (UID: \"d93d446a-5434-4190-862c-fb660b9b87df\") " pod="kube-system/storage-provisioner"
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: W1227 10:03:00.597178    1936 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/crio-4b659a6b0828819515577e58088de3d21b7caefa6367882263c27eef45929e3e WatchSource:0}: Error finding container 4b659a6b0828819515577e58088de3d21b7caefa6367882263c27eef45929e3e: Status 404 returned error can't find the container with id 4b659a6b0828819515577e58088de3d21b7caefa6367882263c27eef45929e3e
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: W1227 10:03:00.611940    1936 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/crio-63428a3523cef7725e26ca22dcfde818007c113f850b022491168a5f90385033 WatchSource:0}: Error finding container 63428a3523cef7725e26ca22dcfde818007c113f850b022491168a5f90385033: Status 404 returned error can't find the container with id 63428a3523cef7725e26ca22dcfde818007c113f850b022491168a5f90385033
	Dec 27 10:03:00 no-preload-021144 kubelet[1936]: E1227 10:03:00.990855    1936 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7h6b" containerName="coredns"
	Dec 27 10:03:01 no-preload-021144 kubelet[1936]: I1227 10:03:01.019309    1936 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.019282465 podStartE2EDuration="14.019282465s" podCreationTimestamp="2025-12-27 10:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:03:01.005023284 +0000 UTC m=+20.326102981" watchObservedRunningTime="2025-12-27 10:03:01.019282465 +0000 UTC m=+20.340362121"
	Dec 27 10:03:01 no-preload-021144 kubelet[1936]: E1227 10:03:01.992666    1936 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7h6b" containerName="coredns"
	Dec 27 10:03:02 no-preload-021144 kubelet[1936]: E1227 10:03:02.995120    1936 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7h6b" containerName="coredns"
	Dec 27 10:03:03 no-preload-021144 kubelet[1936]: I1227 10:03:03.226880    1936 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-p7h6b" podStartSLOduration=17.226858987 podStartE2EDuration="17.226858987s" podCreationTimestamp="2025-12-27 10:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:03:01.020142561 +0000 UTC m=+20.341222226" watchObservedRunningTime="2025-12-27 10:03:03.226858987 +0000 UTC m=+22.547938652"
	Dec 27 10:03:03 no-preload-021144 kubelet[1936]: I1227 10:03:03.281939    1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mm9\" (UniqueName: \"kubernetes.io/projected/7e543378-18ad-4c55-8879-0efffa9bdb70-kube-api-access-c2mm9\") pod \"busybox\" (UID: \"7e543378-18ad-4c55-8879-0efffa9bdb70\") " pod="default/busybox"
	
	
	==> storage-provisioner [97f75d56dc6195d2098cca1f1ed11fd3a4c96276a50b5a601cb86f2c17dfc7f3] <==
	I1227 10:03:00.675674       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:03:00.730756       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:03:00.730858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:03:00.750480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:00.767093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:03:00.767696       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:03:00.768096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f5df63c-860d-4bc0-ad23-f9b0ec7df9e5", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-021144_6c48e791-72c0-4a0e-978c-94ac63d8a256 became leader
	I1227 10:03:00.768138       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-021144_6c48e791-72c0-4a0e-978c-94ac63d8a256!
	W1227 10:03:00.776962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:00.780289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:03:00.869315       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-021144_6c48e791-72c0-4a0e-978c-94ac63d8a256!
	W1227 10:03:02.784004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:02.788486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:04.792251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:04.801826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:06.805168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:06.809945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:08.812552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:08.819481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:10.825909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:10.830500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:12.842724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:03:12.849527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-021144 -n no-preload-021144
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-021144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.41s)
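
The storage-provisioner log above emits the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings because its leader election still takes the kube-system/k8s.io-minikube-hostpath lock through a v1 Endpoints object. A minimal sketch of the same election done against a coordination.k8s.io Lease with client-go follows; the lease name is reused from the log only for illustration, and the identity string and in-cluster config are assumptions, not what the shipped provisioner does.

	package main
	
	import (
		"context"
		"log"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Lease-based lock instead of the deprecated v1 Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner-1"}, // hypothetical identity
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioner controller") },
				OnStoppedLeading: func() { log.Println("lost leadership; stop provisioner controller") },
			},
		})
	}

A Lease-backed lock produces no deprecation warnings and avoids churning the Endpoints object that kube-proxy and service controllers also watch.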

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-021144 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-021144 --alsologtostderr -v=1: exit status 80 (2.108761125s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-021144 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:04:28.267302  509706 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:04:28.267518  509706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:28.267546  509706 out.go:374] Setting ErrFile to fd 2...
	I1227 10:04:28.267566  509706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:28.267882  509706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:04:28.268190  509706 out.go:368] Setting JSON to false
	I1227 10:04:28.268238  509706 mustload.go:66] Loading cluster: no-preload-021144
	I1227 10:04:28.268682  509706 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:28.269217  509706 cli_runner.go:164] Run: docker container inspect no-preload-021144 --format={{.State.Status}}
	I1227 10:04:28.292137  509706 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:04:28.292444  509706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:28.388944  509706 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-27 10:04:28.378883345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:28.389560  509706 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-021144 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:04:28.395632  509706 out.go:179] * Pausing node no-preload-021144 ... 
	I1227 10:04:28.399923  509706 host.go:66] Checking if "no-preload-021144" exists ...
	I1227 10:04:28.400282  509706 ssh_runner.go:195] Run: systemctl --version
	I1227 10:04:28.400328  509706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-021144
	I1227 10:04:28.426313  509706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/no-preload-021144/id_rsa Username:docker}
	I1227 10:04:28.531126  509706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:04:28.553744  509706 pause.go:52] kubelet running: true
	I1227 10:04:28.553874  509706 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:04:28.853093  509706 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:04:28.853177  509706 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:04:28.962505  509706 cri.go:96] found id: "6293953f907e5528b67e0d977b446baf30b852aa274e59762369a92e0ab91949"
	I1227 10:04:28.962529  509706 cri.go:96] found id: "c4fa9ef9befbfd98ffa6e9119e6de96da382909198101360695a996f61df6014"
	I1227 10:04:28.962534  509706 cri.go:96] found id: "45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775"
	I1227 10:04:28.962537  509706 cri.go:96] found id: "352daa6cd4a6a1964f4f855817ee1e9291a42a8caa5882022bce5a87b5ef38e7"
	I1227 10:04:28.962540  509706 cri.go:96] found id: "5bf3302e4122f8d606058ec1bcb193df3d81177fb1f03e2221b04f64f3be159b"
	I1227 10:04:28.962544  509706 cri.go:96] found id: "826fac1ed8726ef3a42c94e0e83f18ada09304c6805c9f89f7b3c2d04e4c1a04"
	I1227 10:04:28.962547  509706 cri.go:96] found id: "8dcf711110ef0028adc15392e388eb3ea778715b2b9bac9fb0a2657eff4887a0"
	I1227 10:04:28.962550  509706 cri.go:96] found id: "327ad0c5ea77e5cb07dbc495716c696dbfb5bd8050c9432839733c1be978ab8f"
	I1227 10:04:28.962553  509706 cri.go:96] found id: "de4d8646c03a270d9b795d812404b843b39536ef99277aa58fc56f50232ffd89"
	I1227 10:04:28.962563  509706 cri.go:96] found id: "bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	I1227 10:04:28.962567  509706 cri.go:96] found id: "aeca985cca2e204c06a9df88fcfd42a200defdc60b6d97b4d4d19192e4a14d30"
	I1227 10:04:28.962570  509706 cri.go:96] found id: ""
	I1227 10:04:28.962617  509706 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:04:28.977235  509706 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:04:28Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:04:29.122582  509706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:04:29.137250  509706 pause.go:52] kubelet running: false
	I1227 10:04:29.137317  509706 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:04:29.339696  509706 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:04:29.339777  509706 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:04:29.424857  509706 cri.go:96] found id: "6293953f907e5528b67e0d977b446baf30b852aa274e59762369a92e0ab91949"
	I1227 10:04:29.424880  509706 cri.go:96] found id: "c4fa9ef9befbfd98ffa6e9119e6de96da382909198101360695a996f61df6014"
	I1227 10:04:29.424885  509706 cri.go:96] found id: "45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775"
	I1227 10:04:29.424889  509706 cri.go:96] found id: "352daa6cd4a6a1964f4f855817ee1e9291a42a8caa5882022bce5a87b5ef38e7"
	I1227 10:04:29.424893  509706 cri.go:96] found id: "5bf3302e4122f8d606058ec1bcb193df3d81177fb1f03e2221b04f64f3be159b"
	I1227 10:04:29.424896  509706 cri.go:96] found id: "826fac1ed8726ef3a42c94e0e83f18ada09304c6805c9f89f7b3c2d04e4c1a04"
	I1227 10:04:29.424900  509706 cri.go:96] found id: "8dcf711110ef0028adc15392e388eb3ea778715b2b9bac9fb0a2657eff4887a0"
	I1227 10:04:29.424903  509706 cri.go:96] found id: "327ad0c5ea77e5cb07dbc495716c696dbfb5bd8050c9432839733c1be978ab8f"
	I1227 10:04:29.424906  509706 cri.go:96] found id: "de4d8646c03a270d9b795d812404b843b39536ef99277aa58fc56f50232ffd89"
	I1227 10:04:29.424913  509706 cri.go:96] found id: "bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	I1227 10:04:29.424917  509706 cri.go:96] found id: "aeca985cca2e204c06a9df88fcfd42a200defdc60b6d97b4d4d19192e4a14d30"
	I1227 10:04:29.424920  509706 cri.go:96] found id: ""
	I1227 10:04:29.424982  509706 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:04:29.909089  509706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:04:29.923072  509706 pause.go:52] kubelet running: false
	I1227 10:04:29.923181  509706 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:04:30.175182  509706 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:04:30.175287  509706 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:04:30.273056  509706 cri.go:96] found id: "6293953f907e5528b67e0d977b446baf30b852aa274e59762369a92e0ab91949"
	I1227 10:04:30.273080  509706 cri.go:96] found id: "c4fa9ef9befbfd98ffa6e9119e6de96da382909198101360695a996f61df6014"
	I1227 10:04:30.273085  509706 cri.go:96] found id: "45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775"
	I1227 10:04:30.273089  509706 cri.go:96] found id: "352daa6cd4a6a1964f4f855817ee1e9291a42a8caa5882022bce5a87b5ef38e7"
	I1227 10:04:30.273092  509706 cri.go:96] found id: "5bf3302e4122f8d606058ec1bcb193df3d81177fb1f03e2221b04f64f3be159b"
	I1227 10:04:30.273096  509706 cri.go:96] found id: "826fac1ed8726ef3a42c94e0e83f18ada09304c6805c9f89f7b3c2d04e4c1a04"
	I1227 10:04:30.273099  509706 cri.go:96] found id: "8dcf711110ef0028adc15392e388eb3ea778715b2b9bac9fb0a2657eff4887a0"
	I1227 10:04:30.273102  509706 cri.go:96] found id: "327ad0c5ea77e5cb07dbc495716c696dbfb5bd8050c9432839733c1be978ab8f"
	I1227 10:04:30.273105  509706 cri.go:96] found id: "de4d8646c03a270d9b795d812404b843b39536ef99277aa58fc56f50232ffd89"
	I1227 10:04:30.273111  509706 cri.go:96] found id: "bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	I1227 10:04:30.273114  509706 cri.go:96] found id: "aeca985cca2e204c06a9df88fcfd42a200defdc60b6d97b4d4d19192e4a14d30"
	I1227 10:04:30.273117  509706 cri.go:96] found id: ""
	I1227 10:04:30.273168  509706 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:04:30.299009  509706 out.go:203] 
	W1227 10:04:30.301943  509706 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:04:30.301971  509706 out.go:285] * 
	* 
	W1227 10:04:30.305845  509706 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:04:30.307997  509706 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-021144 --alsologtostderr -v=1 failed: exit status 80
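The proximate cause visible in the stderr above is that every attempt to enumerate running containers with sudo runc list -f json exits 1 with "open /run/runc: no such file or directory", so the pause path never obtains a container list; one plausible reading is that the OCI runtime on this image keeps its state somewhere other than /run/runc. The sketch below reproduces the same probe with an illustrative fallback to crictl when the runc state directory is missing; the fallback is an assumption for demonstration, not minikube's actual pause logic.

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	// listRunning tries `sudo runc list -f json` and, on the exact error seen in
	// the log above (missing runc state directory), falls back to crictl.
	func listRunning() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if strings.Contains(string(out), "no such file or directory") {
			// Hypothetical fallback: ask the CRI instead of runc directly.
			out, err = exec.Command("sudo", "crictl", "ps", "--state", "Running", "-o", "json").CombinedOutput()
			return string(out), err
		}
		return "", fmt.Errorf("runc list: %v: %s", err, out)
	}
	
	func main() {
		out, err := listRunning()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(out)
	}

Run inside the node container, this distinguishes "no containers" from "wrong runtime state directory", which is the difference between a clean pause and the GUEST_PAUSE exit 80 above.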
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-021144
helpers_test.go:244: (dbg) docker inspect no-preload-021144:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1",
	        "Created": "2025-12-27T10:02:08.318546254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:03:26.338401631Z",
	            "FinishedAt": "2025-12-27T10:03:25.487493066Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/hosts",
	        "LogPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1-json.log",
	        "Name": "/no-preload-021144",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-021144:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-021144",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1",
	                "LowerDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-021144",
	                "Source": "/var/lib/docker/volumes/no-preload-021144/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-021144",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-021144",
	                "name.minikube.sigs.k8s.io": "no-preload-021144",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ea25ee6f2642cd3dc4a48560eac2f565fea4d065b6c5a05c2b11faea202ac58",
	            "SandboxKey": "/var/run/docker/netns/2ea25ee6f264",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-021144": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:e2:23:5e:0f:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "580e567ffdb0a3108b9672089c71417e29baa569ff9d213d3d1dd6886e00e475",
	                    "EndpointID": "8dd162f939f08a9cc82d8e0b6ff21d75544497532448984d85cf245495e013ee",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-021144",
	                        "ab89938537bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
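The docker inspect dump above is what the post-mortem helper collects via the CLI; the same State and port-binding fields can also be read with the Docker Engine Go SDK, as in the sketch below (the hard-coded container name and minimal error handling are illustrative assumptions).

	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		"github.com/docker/docker/client"
	)
	
	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()
	
		// Same fields the post-mortem reads via `docker inspect`.
		info, err := cli.ContainerInspect(context.Background(), "no-preload-021144")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status=%s paused=%v startedAt=%s\n", info.State.Status, info.State.Paused, info.State.StartedAt)
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}

For this failure the interesting fields are State.Status ("running") and State.Paused (false), confirming that the node container itself was never paused when the command exited 80.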
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144: exit status 2 (436.467899ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-021144 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-021144 logs -n 25: (1.626335303s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                                                                                                  │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122        │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                                                                                                    │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:04:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:04:17.739658  508478 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:04:17.739865  508478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:17.739893  508478 out.go:374] Setting ErrFile to fd 2...
	I1227 10:04:17.739913  508478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:17.740289  508478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:04:17.740839  508478 out.go:368] Setting JSON to false
	I1227 10:04:17.741819  508478 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10007,"bootTime":1766819851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:04:17.741929  508478 start.go:143] virtualization:  
	I1227 10:04:17.745436  508478 out.go:179] * [embed-certs-017122] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:04:17.749746  508478 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:04:17.749832  508478 notify.go:221] Checking for updates...
	I1227 10:04:17.756202  508478 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:04:17.759561  508478 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:04:17.762723  508478 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:04:17.765845  508478 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:04:17.768780  508478 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:04:17.772502  508478 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:17.772608  508478 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:04:17.809977  508478 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:04:17.811148  508478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:17.868906  508478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:04:17.858435415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:17.869014  508478 docker.go:319] overlay module found
	I1227 10:04:17.872307  508478 out.go:179] * Using the docker driver based on user configuration
	I1227 10:04:17.875316  508478 start.go:309] selected driver: docker
	I1227 10:04:17.875341  508478 start.go:928] validating driver "docker" against <nil>
	I1227 10:04:17.875356  508478 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:04:17.876121  508478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:17.941997  508478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:04:17.931979405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:17.942242  508478 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:04:17.942479  508478 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:04:17.945549  508478 out.go:179] * Using Docker driver with root privileges
	I1227 10:04:17.948426  508478 cni.go:84] Creating CNI manager for ""
	I1227 10:04:17.948506  508478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:04:17.948520  508478 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:04:17.948606  508478 start.go:353] cluster config:
	{Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:04:17.951774  508478 out.go:179] * Starting "embed-certs-017122" primary control-plane node in "embed-certs-017122" cluster
	I1227 10:04:17.954592  508478 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:04:17.957538  508478 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:04:17.960474  508478 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:17.960527  508478 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:04:17.960538  508478 cache.go:65] Caching tarball of preloaded images
	I1227 10:04:17.960577  508478 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:04:17.960654  508478 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:04:17.960665  508478 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:04:17.960785  508478 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json ...
	I1227 10:04:17.960803  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json: {Name:mkad2255aee1f11f52b5c34344b6a9598626841f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:17.981300  508478 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:04:17.981325  508478 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:04:17.981347  508478 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:04:17.981381  508478 start.go:360] acquireMachinesLock for embed-certs-017122: {Name:mkc5c6a144bc51d843c500d769feb1ef839b15a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:04:17.981559  508478 start.go:364] duration metric: took 155.382µs to acquireMachinesLock for "embed-certs-017122"
	I1227 10:04:17.981593  508478 start.go:93] Provisioning new machine with config: &{Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:04:17.981665  508478 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:04:17.985192  508478 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:04:17.985587  508478 start.go:159] libmachine.API.Create for "embed-certs-017122" (driver="docker")
	I1227 10:04:17.985621  508478 client.go:173] LocalClient.Create starting
	I1227 10:04:17.985760  508478 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:04:17.985857  508478 main.go:144] libmachine: Decoding PEM data...
	I1227 10:04:17.985919  508478 main.go:144] libmachine: Parsing certificate...
	I1227 10:04:17.986013  508478 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:04:17.986092  508478 main.go:144] libmachine: Decoding PEM data...
	I1227 10:04:17.986109  508478 main.go:144] libmachine: Parsing certificate...
	I1227 10:04:17.986689  508478 cli_runner.go:164] Run: docker network inspect embed-certs-017122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:04:18.006447  508478 cli_runner.go:211] docker network inspect embed-certs-017122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:04:18.006540  508478 network_create.go:284] running [docker network inspect embed-certs-017122] to gather additional debugging logs...
	I1227 10:04:18.006563  508478 cli_runner.go:164] Run: docker network inspect embed-certs-017122
	W1227 10:04:18.025821  508478 cli_runner.go:211] docker network inspect embed-certs-017122 returned with exit code 1
	I1227 10:04:18.025855  508478 network_create.go:287] error running [docker network inspect embed-certs-017122]: docker network inspect embed-certs-017122: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-017122 not found
	I1227 10:04:18.025868  508478 network_create.go:289] output of [docker network inspect embed-certs-017122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-017122 not found
	
	** /stderr **
	I1227 10:04:18.025961  508478 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:04:18.043441  508478 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:04:18.043828  508478 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:04:18.044196  508478 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:04:18.044642  508478 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a139b0}
	I1227 10:04:18.044665  508478 network_create.go:124] attempt to create docker network embed-certs-017122 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:04:18.044731  508478 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-017122 embed-certs-017122
	I1227 10:04:18.110996  508478 network_create.go:108] docker network embed-certs-017122 192.168.76.0/24 created
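(Editorial note, not part of the captured log.) The "skipping subnet ... that is taken" lines above show the free-subnet search: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already bound to existing bridges, so 192.168.76.0/24 is chosen and the network is created with matching --subnet/--gateway/--mtu options. A minimal standalone Go sketch of that selection step follows; the candidate list and the set of taken subnets are illustrative stand-ins, not minikube's actual data structures.

package main

import (
	"fmt"
	"net"
)

// pickFreeSubnet returns the first candidate /24 that is not already in use,
// mirroring the "skipping subnet X that is taken" / "using free private subnet Y"
// lines in the log above.
func pickFreeSubnet(candidates []string, taken map[string]bool) (*net.IPNet, error) {
	for _, c := range candidates {
		if taken[c] {
			fmt.Printf("skipping subnet %s that is taken\n", c)
			continue
		}
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	// Illustrative data: the taken set corresponds to the bridges reported above.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true}

	free, err := pickFreeSubnet(candidates, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", free) // expected: 192.168.76.0/24
}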
	I1227 10:04:18.111041  508478 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-017122" container
	I1227 10:04:18.111115  508478 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:04:18.132608  508478 cli_runner.go:164] Run: docker volume create embed-certs-017122 --label name.minikube.sigs.k8s.io=embed-certs-017122 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:04:18.151597  508478 oci.go:103] Successfully created a docker volume embed-certs-017122
	I1227 10:04:18.151691  508478 cli_runner.go:164] Run: docker run --rm --name embed-certs-017122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-017122 --entrypoint /usr/bin/test -v embed-certs-017122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:04:18.680605  508478 oci.go:107] Successfully prepared a docker volume embed-certs-017122
	I1227 10:04:18.680677  508478 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:18.680692  508478 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:04:18.680792  508478 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-017122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:04:22.605359  508478 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-017122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.924525795s)
	I1227 10:04:22.605396  508478 kic.go:203] duration metric: took 3.92470005s to extract preloaded images to volume ...
	W1227 10:04:22.605549  508478 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:04:22.605679  508478 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:04:22.667004  508478 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-017122 --name embed-certs-017122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-017122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-017122 --network embed-certs-017122 --ip 192.168.76.2 --volume embed-certs-017122:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:04:22.973098  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Running}}
	I1227 10:04:23.000002  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:23.030061  508478 cli_runner.go:164] Run: docker exec embed-certs-017122 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:04:23.079658  508478 oci.go:144] the created container "embed-certs-017122" has a running status.
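(Editorial note, not part of the captured log.) Right after the long `docker run -d -t --privileged ...` invocation, the two `docker container inspect --format={{.State.Running}}` / `--format={{.State.Status}}` calls confirm the node container actually came up before SSH provisioning starts. A rough standalone equivalent of that readiness check is sketched below; the container name and the 30-second budget are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format {{.State.Running}}`
// until the container reports true or the timeout expires.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	// "embed-certs-017122" matches the container created in the log above.
	if err := waitRunning("embed-certs-017122", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("container is running")
}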
	I1227 10:04:23.079686  508478 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa...
	I1227 10:04:23.487247  508478 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:04:23.512968  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:23.543388  508478 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:04:23.543409  508478 kic_runner.go:114] Args: [docker exec --privileged embed-certs-017122 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:04:23.623028  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:23.655947  508478 machine.go:94] provisionDockerMachine start ...
	I1227 10:04:23.656057  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:23.683901  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:23.684239  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:23.684248  508478 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:04:23.684911  508478 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:04:26.829827  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-017122
	
	I1227 10:04:26.829864  508478 ubuntu.go:182] provisioning hostname "embed-certs-017122"
	I1227 10:04:26.829944  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:26.848484  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:26.848806  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:26.848817  508478 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-017122 && echo "embed-certs-017122" | sudo tee /etc/hostname
	I1227 10:04:27.005008  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-017122
	
	I1227 10:04:27.005109  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:27.024304  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:27.024624  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:27.024640  508478 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-017122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-017122/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-017122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:04:27.166618  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:04:27.166687  508478 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:04:27.166731  508478 ubuntu.go:190] setting up certificates
	I1227 10:04:27.166770  508478 provision.go:84] configureAuth start
	I1227 10:04:27.166859  508478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-017122
	I1227 10:04:27.185246  508478 provision.go:143] copyHostCerts
	I1227 10:04:27.185327  508478 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:04:27.185349  508478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:04:27.185429  508478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:04:27.185534  508478 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:04:27.185543  508478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:04:27.185576  508478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:04:27.185639  508478 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:04:27.185662  508478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:04:27.185694  508478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:04:27.185755  508478 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.embed-certs-017122 san=[127.0.0.1 192.168.76.2 embed-certs-017122 localhost minikube]
	I1227 10:04:28.113037  508478 provision.go:177] copyRemoteCerts
	I1227 10:04:28.113174  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:04:28.113259  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.131769  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:28.243263  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:04:28.267550  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:04:28.291847  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:04:28.313892  508478 provision.go:87] duration metric: took 1.147090535s to configureAuth
	I1227 10:04:28.313926  508478 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:04:28.314131  508478 config.go:182] Loaded profile config "embed-certs-017122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:28.314714  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.347712  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:28.348026  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:28.348048  508478 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:04:28.699722  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:04:28.699797  508478 machine.go:97] duration metric: took 5.043817631s to provisionDockerMachine
	I1227 10:04:28.699821  508478 client.go:176] duration metric: took 10.714194264s to LocalClient.Create
	I1227 10:04:28.699855  508478 start.go:167] duration metric: took 10.714276217s to libmachine.API.Create "embed-certs-017122"
	I1227 10:04:28.699893  508478 start.go:293] postStartSetup for "embed-certs-017122" (driver="docker")
	I1227 10:04:28.699918  508478 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:04:28.700054  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:04:28.700118  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.727495  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:28.827673  508478 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:04:28.831255  508478 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:04:28.831285  508478 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:04:28.831297  508478 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:04:28.831355  508478 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:04:28.831441  508478 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:04:28.831548  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:04:28.840712  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:04:28.868965  508478 start.go:296] duration metric: took 169.02811ms for postStartSetup
	I1227 10:04:28.869350  508478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-017122
	I1227 10:04:28.889534  508478 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json ...
	I1227 10:04:28.889827  508478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:04:28.889878  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.911983  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:29.011630  508478 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:04:29.016867  508478 start.go:128] duration metric: took 11.035184784s to createHost
	I1227 10:04:29.016893  508478 start.go:83] releasing machines lock for "embed-certs-017122", held for 11.035319203s
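(Editorial note, not part of the captured log.) The recurring "duration metric: took ..." lines (155µs to acquire the machines lock, ~11s for createHost, ~11s held before the lock is released here) are simple wall-clock measurements around each provisioning step. The pattern is nothing more than timing a closure, roughly as sketched below; the step name and the simulated work are placeholders.

package main

import (
	"fmt"
	"time"
)

// timeStep runs fn and reports how long it took, in the style of the
// "duration metric: took X to Y" lines above.
func timeStep(name string, fn func() error) error {
	start := time.Now()
	err := fn()
	fmt.Printf("duration metric: took %s to %s\n", time.Since(start), name)
	return err
}

func main() {
	_ = timeStep(`createHost for "embed-certs-017122"`, func() error {
		time.Sleep(100 * time.Millisecond) // stand-in for the real provisioning work
		return nil
	})
}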
	I1227 10:04:29.016975  508478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-017122
	I1227 10:04:29.034813  508478 ssh_runner.go:195] Run: cat /version.json
	I1227 10:04:29.034853  508478 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:04:29.034871  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:29.034914  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:29.052778  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:29.062277  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:29.254264  508478 ssh_runner.go:195] Run: systemctl --version
	I1227 10:04:29.266887  508478 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:04:29.325555  508478 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:04:29.332462  508478 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:04:29.332536  508478 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:04:29.370497  508478 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:04:29.370524  508478 start.go:496] detecting cgroup driver to use...
	I1227 10:04:29.370560  508478 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:04:29.370615  508478 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:04:29.395640  508478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:04:29.410720  508478 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:04:29.410788  508478 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:04:29.432472  508478 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:04:29.454832  508478 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:04:29.587722  508478 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:04:29.713897  508478 docker.go:234] disabling docker service ...
	I1227 10:04:29.713974  508478 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:04:29.735341  508478 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:04:29.749499  508478 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:04:29.878880  508478 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:04:30.052006  508478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:04:30.075471  508478 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:04:30.096067  508478 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:04:30.096193  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.107312  508478 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:04:30.107466  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.117893  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.128196  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.141596  508478 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:04:30.155733  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.167807  508478 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.190521  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.203864  508478 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:04:30.213330  508478 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:04:30.224216  508478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:30.389282  508478 ssh_runner.go:195] Run: sudo systemctl restart crio
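(Editorial note, not part of the captured log.) The block of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "cgroupfs", conmon_cgroup = "pod" is re-inserted, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A small Go sketch of the two headline substitutions, applied to the same file, follows; the error handling and file permissions are illustrative choices, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // same path targeted by the sed commands above

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", conf)
}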
	I1227 10:04:30.570177  508478 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:04:30.570251  508478 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:04:30.574783  508478 start.go:574] Will wait 60s for crictl version
	I1227 10:04:30.574847  508478 ssh_runner.go:195] Run: which crictl
	I1227 10:04:30.579682  508478 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:04:30.606090  508478 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
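(Editorial note, not part of the captured log.) After restarting crio, the log shows two explicit waits: up to 60s for the /var/run/crio/crio.sock socket to appear, then up to 60s for a usable crictl, after which the RuntimeName/RuntimeVersion block above is printed. Waiting for the socket is essentially a stat loop with a deadline; a minimal sketch follows, with the 250ms poll interval as an assumed value.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats path until it exists or the timeout expires, mirroring
// "Will wait 60s for socket path /var/run/crio/crio.sock" above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}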
	I1227 10:04:30.606204  508478 ssh_runner.go:195] Run: crio --version
	I1227 10:04:30.640961  508478 ssh_runner.go:195] Run: crio --version
	I1227 10:04:30.679772  508478 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	
	==> CRI-O <==
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.927647236Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.931403246Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.931571838Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.931654761Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.94089641Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.941060564Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.941139843Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.945744667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.945909412Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.94598458Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.949817842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.949974267Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.145580488Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=68487fb3-0408-4a66-93b3-039d4b040ddd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.147417893Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=99f7fc65-4b70-4a67-9bf2-53d154b55344 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.149183831Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper" id=0c5cc63f-002f-4cd6-af55-aced53c90731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.149310807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.167374557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.167945206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.201633798Z" level=info msg="Created container bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper" id=0c5cc63f-002f-4cd6-af55-aced53c90731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.203885781Z" level=info msg="Starting container: bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186" id=f83c5185-9ff5-49cc-b1d9-ebb41a03cb5a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.205528632Z" level=info msg="Started container" PID=1731 containerID=bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper id=f83c5185-9ff5-49cc-b1d9-ebb41a03cb5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff19dd9f32e5a39cf3d1d75aaf91a9acccc137ec00d90c7ecfe818cbdd125d47
	Dec 27 10:04:21 no-preload-021144 conmon[1729]: conmon bc7f7a46946933577438 <ninfo>: container 1731 exited with status 1
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.32940047Z" level=info msg="Removing container: 09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff" id=2e4ea3b7-331b-4fab-bc2b-4a009b1e49df name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.340555579Z" level=info msg="Error loading conmon cgroup of container 09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff: cgroup deleted" id=2e4ea3b7-331b-4fab-bc2b-4a009b1e49df name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.347009157Z" level=info msg="Removed container 09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper" id=2e4ea3b7-331b-4fab-bc2b-4a009b1e49df name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	bc7f7a4694693       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   ff19dd9f32e5a       dashboard-metrics-scraper-867fb5f87b-9hmrk   kubernetes-dashboard
	6293953f907e5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago      Running             storage-provisioner         2                   ecff762185516       storage-provisioner                          kube-system
	aeca985cca2e2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   e2f9162cf13ac       kubernetes-dashboard-b84665fb8-khhmw         kubernetes-dashboard
	c4fa9ef9befbf       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   8b6bee8ccda9f       coredns-7d764666f9-p7h6b                     kube-system
	d1c27e30bf37a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   f883c95daa7ad       busybox                                      default
	45ecaf16299f9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago      Exited              storage-provisioner         1                   ecff762185516       storage-provisioner                          kube-system
	352daa6cd4a6a       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   959150f7ef3e8       kindnet-hnnqk                                kube-system
	5bf3302e4122f       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   9d145304a59a0       kube-proxy-gzt2m                             kube-system
	826fac1ed8726       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           57 seconds ago      Running             kube-controller-manager     1                   513d710e39756       kube-controller-manager-no-preload-021144    kube-system
	8dcf711110ef0       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           57 seconds ago      Running             kube-scheduler              1                   1383bc7496a3c       kube-scheduler-no-preload-021144             kube-system
	327ad0c5ea77e       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           57 seconds ago      Running             kube-apiserver              1                   db4fd6ced6b29       kube-apiserver-no-preload-021144             kube-system
	de4d8646c03a2       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           57 seconds ago      Running             etcd                        1                   9e2d9af88cad5       etcd-no-preload-021144                       kube-system
	
	
	==> coredns [c4fa9ef9befbfd98ffa6e9119e6de96da382909198101360695a996f61df6014] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36207 - 50508 "HINFO IN 8237353146647902975.9173068558679202732. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021961259s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-021144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-021144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=no-preload-021144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_02_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:02:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-021144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:04:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-021144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                cb1e511d-4a03-4ff7-9ae5-96dca8c8e0f7
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-p7h6b                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-no-preload-021144                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-hnnqk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-021144              250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-021144     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-gzt2m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-021144              100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-9hmrk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-khhmw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-021144 event: Registered Node no-preload-021144 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-021144 event: Registered Node no-preload-021144 in Controller
	
	
	==> dmesg <==
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [de4d8646c03a270d9b795d812404b843b39536ef99277aa58fc56f50232ffd89] <==
	{"level":"info","ts":"2025-12-27T10:03:33.797484Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:03:33.797520Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:03:33.797997Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:03:33.798434Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:03:33.798500Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:03:33.798680Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:03:33.798736Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:03:34.269911Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:03:34.269964Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:03:34.270017Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:03:34.270036Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:03:34.270052Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.274267Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.274338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:03:34.274380Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.274427Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.282430Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-021144 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:03:34.282596Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:03:34.282715Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:03:34.282775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:03:34.282785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:03:34.283597Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:03:34.285287Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:03:34.285986Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:03:34.305442Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:04:31 up  2:47,  0 user,  load average: 1.93, 1.81, 2.00
	Linux no-preload-021144 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [352daa6cd4a6a1964f4f855817ee1e9291a42a8caa5882022bce5a87b5ef38e7] <==
	I1227 10:03:38.721527       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:03:38.721888       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:03:38.722107       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:03:38.722191       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:03:38.722232       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:03:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:03:38.927361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:03:38.927992       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:03:38.928097       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:03:38.928284       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:04:08.927038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:04:08.928265       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:04:08.928428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:04:08.928511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:04:10.428761       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:04:10.428873       1 metrics.go:72] Registering metrics
	I1227 10:04:10.428968       1 controller.go:711] "Syncing nftables rules"
	I1227 10:04:18.927356       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:04:18.927408       1 main.go:301] handling current node
	I1227 10:04:28.927327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:04:28.927358       1 main.go:301] handling current node
	
	
	==> kube-apiserver [327ad0c5ea77e5cb07dbc495716c696dbfb5bd8050c9432839733c1be978ab8f] <==
	I1227 10:03:37.324513       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:03:37.348979       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:37.356918       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:03:37.356934       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:03:37.357113       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:03:37.358777       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:03:37.358886       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:37.358827       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:03:37.361382       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:03:37.361965       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:03:37.361971       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:03:37.361978       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:03:37.371021       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 10:03:37.452112       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:03:37.823911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:03:37.872520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:03:37.901840       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:03:37.913606       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:03:37.924745       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:03:37.959004       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:03:38.006681       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.150.105"}
	I1227 10:03:38.034177       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.154.58"}
	I1227 10:03:40.966480       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:03:41.016155       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:03:41.066584       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [826fac1ed8726ef3a42c94e0e83f18ada09304c6805c9f89f7b3c2d04e4c1a04] <==
	I1227 10:03:40.420741       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.420780       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.420808       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.421047       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-021144"
	I1227 10:03:40.421331       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:03:40.421900       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:03:40.421952       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:40.421982       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422092       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422225       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422282       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422358       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422537       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422625       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.423079       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.423410       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.423572       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.424955       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.433850       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.441399       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:40.450282       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.531222       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.531324       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:03:40.531356       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:03:40.545409       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5bf3302e4122f8d606058ec1bcb193df3d81177fb1f03e2221b04f64f3be159b] <==
	I1227 10:03:38.715165       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:03:38.810214       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:38.910905       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:38.910945       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:03:38.911011       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:03:38.951963       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:03:38.952077       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:03:38.955916       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:03:38.956296       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:03:38.956746       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:03:38.957995       1 config.go:200] "Starting service config controller"
	I1227 10:03:38.962997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:03:38.958134       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:03:38.963130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:03:38.958241       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:03:38.963206       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:03:38.958875       1 config.go:309] "Starting node config controller"
	I1227 10:03:38.963263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:03:38.963291       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:03:39.063255       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 10:03:39.063269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:03:39.063282       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8dcf711110ef0028adc15392e388eb3ea778715b2b9bac9fb0a2657eff4887a0] <==
	I1227 10:03:35.130285       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:03:37.188116       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:03:37.188150       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:03:37.188160       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:03:37.188167       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:03:37.296374       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:03:37.296407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:03:37.327078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:03:37.327225       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:03:37.327246       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:37.327262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:03:37.450399       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:03:55 no-preload-021144 kubelet[781]: I1227 10:03:55.498647     781 scope.go:122] "RemoveContainer" containerID="0018c3dadf334ca3e015cca26eccaa4eb0ea9ad26e451cceaf6c4e0f36fd817e"
	Dec 27 10:03:55 no-preload-021144 kubelet[781]: E1227 10:03:55.498906     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: E1227 10:03:57.144758     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: I1227 10:03:57.145365     781 scope.go:122] "RemoveContainer" containerID="0018c3dadf334ca3e015cca26eccaa4eb0ea9ad26e451cceaf6c4e0f36fd817e"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: I1227 10:03:57.261647     781 scope.go:122] "RemoveContainer" containerID="0018c3dadf334ca3e015cca26eccaa4eb0ea9ad26e451cceaf6c4e0f36fd817e"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: E1227 10:03:57.261961     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: I1227 10:03:57.261987     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: E1227 10:03:57.262183     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:05 no-preload-021144 kubelet[781]: E1227 10:04:05.498948     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:05 no-preload-021144 kubelet[781]: I1227 10:04:05.499444     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:04:05 no-preload-021144 kubelet[781]: E1227 10:04:05.499691     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:09 no-preload-021144 kubelet[781]: I1227 10:04:09.294019     781 scope.go:122] "RemoveContainer" containerID="45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775"
	Dec 27 10:04:14 no-preload-021144 kubelet[781]: E1227 10:04:14.665149     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7h6b" containerName="coredns"
	Dec 27 10:04:21 no-preload-021144 kubelet[781]: E1227 10:04:21.144886     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:21 no-preload-021144 kubelet[781]: I1227 10:04:21.144927     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:04:21 no-preload-021144 kubelet[781]: I1227 10:04:21.328082     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:04:22 no-preload-021144 kubelet[781]: E1227 10:04:22.332227     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:22 no-preload-021144 kubelet[781]: I1227 10:04:22.332262     781 scope.go:122] "RemoveContainer" containerID="bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	Dec 27 10:04:22 no-preload-021144 kubelet[781]: E1227 10:04:22.332403     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:25 no-preload-021144 kubelet[781]: E1227 10:04:25.498770     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:25 no-preload-021144 kubelet[781]: I1227 10:04:25.498820     781 scope.go:122] "RemoveContainer" containerID="bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	Dec 27 10:04:25 no-preload-021144 kubelet[781]: E1227 10:04:25.498983     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:28 no-preload-021144 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:04:28 no-preload-021144 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:04:28 no-preload-021144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [aeca985cca2e204c06a9df88fcfd42a200defdc60b6d97b4d4d19192e4a14d30] <==
	2025/12/27 10:03:49 Using namespace: kubernetes-dashboard
	2025/12/27 10:03:49 Using in-cluster config to connect to apiserver
	2025/12/27 10:03:49 Using secret token for csrf signing
	2025/12/27 10:03:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:03:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:03:49 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:03:49 Generating JWE encryption key
	2025/12/27 10:03:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:03:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:03:50 Initializing JWE encryption key from synchronized object
	2025/12/27 10:03:50 Creating in-cluster Sidecar client
	2025/12/27 10:03:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:03:50 Serving insecurely on HTTP port: 9090
	2025/12/27 10:04:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:03:49 Starting overwatch
	
	
	==> storage-provisioner [45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775] <==
	I1227 10:03:38.620988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:04:08.622554       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6293953f907e5528b67e0d977b446baf30b852aa274e59762369a92e0ab91949] <==
	I1227 10:04:09.358001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:04:09.372042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:04:09.372091       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:04:09.375403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:12.830917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:17.092286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:20.690895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:23.745151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:26.768717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:26.775953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:04:26.776099       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:04:26.776278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-021144_3c7a24a1-f341-4c0a-8a61-f4ee43c50ac4!
	I1227 10:04:26.777179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f5df63c-860d-4bc0-ad23-f9b0ec7df9e5", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-021144_3c7a24a1-f341-4c0a-8a61-f4ee43c50ac4 became leader
	W1227 10:04:26.785171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:26.791235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:04:26.877086       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-021144_3c7a24a1-f341-4c0a-8a61-f4ee43c50ac4!
	W1227 10:04:28.795085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:28.805716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:30.809453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:30.825669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-021144 -n no-preload-021144
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-021144 -n no-preload-021144: exit status 2 (439.504892ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-021144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
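A minimal, illustrative sketch of the same non-Running-pod query, shelling out to kubectl the way the harness does (not part of the test code; assumes kubectl is on PATH and the no-preload-021144 kube context exists):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List pod names across all namespaces whose phase is not Running,
	// mirroring the post-mortem command above.
	out, err := exec.Command("kubectl",
		"--context", "no-preload-021144",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("non-Running pods: %q\n", string(out))
}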
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-021144
helpers_test.go:244: (dbg) docker inspect no-preload-021144:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1",
	        "Created": "2025-12-27T10:02:08.318546254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:03:26.338401631Z",
	            "FinishedAt": "2025-12-27T10:03:25.487493066Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/hosts",
	        "LogPath": "/var/lib/docker/containers/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1/ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1-json.log",
	        "Name": "/no-preload-021144",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-021144:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-021144",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ab89938537bbdf10f981cdbb065149ea236bebfe2c08f7d5ff90bb70bae01ff1",
	                "LowerDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd2f19d759acf34ab41ac7962a845e4705e5c52a8a2d5b4fa791e70efb755ef7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-021144",
	                "Source": "/var/lib/docker/volumes/no-preload-021144/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-021144",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-021144",
	                "name.minikube.sigs.k8s.io": "no-preload-021144",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ea25ee6f2642cd3dc4a48560eac2f565fea4d065b6c5a05c2b11faea202ac58",
	            "SandboxKey": "/var/run/docker/netns/2ea25ee6f264",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-021144": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:e2:23:5e:0f:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "580e567ffdb0a3108b9672089c71417e29baa569ff9d213d3d1dd6886e00e475",
	                    "EndpointID": "8dd162f939f08a9cc82d8e0b6ff21d75544497532448984d85cf245495e013ee",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-021144",
	                        "ab89938537bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
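The Ports block in the inspect output above shows the host-side mappings chosen for this cluster; 8443/tcp, the apiserver port, is published on 127.0.0.1:33439. A minimal sketch of reading that mapping with a docker CLI format template instead of parsing the full JSON (illustrative only; assumes the docker CLI is on PATH and the no-preload-021144 container exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask docker for just the HostPort bound to 8443/tcp via a Go format
	// template, rather than parsing the full JSON shown above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"no-preload-021144",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("docker inspect failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("apiserver host port: %s\n", strings.TrimSpace(string(out))) // 33439 in the dump above
}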
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144: exit status 2 (409.534262ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-021144 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-021144 logs -n 25: (1.583324658s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p force-systemd-flag-779725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 09:55 UTC │                     │
	│ delete  │ -p force-systemd-env-029895                                                                                                                                                                                                                   │ force-systemd-env-029895  │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:58 UTC │
	│ start   │ -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:58 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ cert-options-057459 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ ssh     │ -p cert-options-057459 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305    │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                                                                                                  │ force-systemd-flag-779725 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122        │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                                                                                                    │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-021144         │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:04:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:04:17.739658  508478 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:04:17.739865  508478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:17.739893  508478 out.go:374] Setting ErrFile to fd 2...
	I1227 10:04:17.739913  508478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:17.740289  508478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:04:17.740839  508478 out.go:368] Setting JSON to false
	I1227 10:04:17.741819  508478 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10007,"bootTime":1766819851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:04:17.741929  508478 start.go:143] virtualization:  
	I1227 10:04:17.745436  508478 out.go:179] * [embed-certs-017122] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:04:17.749746  508478 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:04:17.749832  508478 notify.go:221] Checking for updates...
	I1227 10:04:17.756202  508478 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:04:17.759561  508478 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:04:17.762723  508478 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:04:17.765845  508478 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:04:17.768780  508478 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:04:17.772502  508478 config.go:182] Loaded profile config "no-preload-021144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:17.772608  508478 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:04:17.809977  508478 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:04:17.811148  508478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:17.868906  508478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:04:17.858435415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:17.869014  508478 docker.go:319] overlay module found
	I1227 10:04:17.872307  508478 out.go:179] * Using the docker driver based on user configuration
	I1227 10:04:17.875316  508478 start.go:309] selected driver: docker
	I1227 10:04:17.875341  508478 start.go:928] validating driver "docker" against <nil>
	I1227 10:04:17.875356  508478 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:04:17.876121  508478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:17.941997  508478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:04:17.931979405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:17.942242  508478 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:04:17.942479  508478 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:04:17.945549  508478 out.go:179] * Using Docker driver with root privileges
	I1227 10:04:17.948426  508478 cni.go:84] Creating CNI manager for ""
	I1227 10:04:17.948506  508478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:04:17.948520  508478 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:04:17.948606  508478 start.go:353] cluster config:
	{Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:04:17.951774  508478 out.go:179] * Starting "embed-certs-017122" primary control-plane node in "embed-certs-017122" cluster
	I1227 10:04:17.954592  508478 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:04:17.957538  508478 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:04:17.960474  508478 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:17.960527  508478 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:04:17.960538  508478 cache.go:65] Caching tarball of preloaded images
	I1227 10:04:17.960577  508478 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:04:17.960654  508478 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:04:17.960665  508478 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:04:17.960785  508478 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json ...
	I1227 10:04:17.960803  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json: {Name:mkad2255aee1f11f52b5c34344b6a9598626841f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:17.981300  508478 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:04:17.981325  508478 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:04:17.981347  508478 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:04:17.981381  508478 start.go:360] acquireMachinesLock for embed-certs-017122: {Name:mkc5c6a144bc51d843c500d769feb1ef839b15a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:04:17.981559  508478 start.go:364] duration metric: took 155.382µs to acquireMachinesLock for "embed-certs-017122"
	I1227 10:04:17.981593  508478 start.go:93] Provisioning new machine with config: &{Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:04:17.981665  508478 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:04:17.985192  508478 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:04:17.985587  508478 start.go:159] libmachine.API.Create for "embed-certs-017122" (driver="docker")
	I1227 10:04:17.985621  508478 client.go:173] LocalClient.Create starting
	I1227 10:04:17.985760  508478 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:04:17.985857  508478 main.go:144] libmachine: Decoding PEM data...
	I1227 10:04:17.985919  508478 main.go:144] libmachine: Parsing certificate...
	I1227 10:04:17.986013  508478 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:04:17.986092  508478 main.go:144] libmachine: Decoding PEM data...
	I1227 10:04:17.986109  508478 main.go:144] libmachine: Parsing certificate...
	I1227 10:04:17.986689  508478 cli_runner.go:164] Run: docker network inspect embed-certs-017122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:04:18.006447  508478 cli_runner.go:211] docker network inspect embed-certs-017122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:04:18.006540  508478 network_create.go:284] running [docker network inspect embed-certs-017122] to gather additional debugging logs...
	I1227 10:04:18.006563  508478 cli_runner.go:164] Run: docker network inspect embed-certs-017122
	W1227 10:04:18.025821  508478 cli_runner.go:211] docker network inspect embed-certs-017122 returned with exit code 1
	I1227 10:04:18.025855  508478 network_create.go:287] error running [docker network inspect embed-certs-017122]: docker network inspect embed-certs-017122: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-017122 not found
	I1227 10:04:18.025868  508478 network_create.go:289] output of [docker network inspect embed-certs-017122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-017122 not found
	
	** /stderr **
	I1227 10:04:18.025961  508478 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:04:18.043441  508478 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:04:18.043828  508478 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:04:18.044196  508478 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:04:18.044642  508478 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a139b0}
	I1227 10:04:18.044665  508478 network_create.go:124] attempt to create docker network embed-certs-017122 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:04:18.044731  508478 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-017122 embed-certs-017122
	I1227 10:04:18.110996  508478 network_create.go:108] docker network embed-certs-017122 192.168.76.0/24 created
	I1227 10:04:18.111041  508478 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-017122" container
	I1227 10:04:18.111115  508478 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:04:18.132608  508478 cli_runner.go:164] Run: docker volume create embed-certs-017122 --label name.minikube.sigs.k8s.io=embed-certs-017122 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:04:18.151597  508478 oci.go:103] Successfully created a docker volume embed-certs-017122
	I1227 10:04:18.151691  508478 cli_runner.go:164] Run: docker run --rm --name embed-certs-017122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-017122 --entrypoint /usr/bin/test -v embed-certs-017122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:04:18.680605  508478 oci.go:107] Successfully prepared a docker volume embed-certs-017122
	I1227 10:04:18.680677  508478 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:18.680692  508478 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:04:18.680792  508478 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-017122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:04:22.605359  508478 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-017122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.924525795s)
	I1227 10:04:22.605396  508478 kic.go:203] duration metric: took 3.92470005s to extract preloaded images to volume ...
	W1227 10:04:22.605549  508478 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:04:22.605679  508478 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:04:22.667004  508478 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-017122 --name embed-certs-017122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-017122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-017122 --network embed-certs-017122 --ip 192.168.76.2 --volume embed-certs-017122:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:04:22.973098  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Running}}
	I1227 10:04:23.000002  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:23.030061  508478 cli_runner.go:164] Run: docker exec embed-certs-017122 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:04:23.079658  508478 oci.go:144] the created container "embed-certs-017122" has a running status.
	I1227 10:04:23.079686  508478 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa...
	I1227 10:04:23.487247  508478 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:04:23.512968  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:23.543388  508478 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:04:23.543409  508478 kic_runner.go:114] Args: [docker exec --privileged embed-certs-017122 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:04:23.623028  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:23.655947  508478 machine.go:94] provisionDockerMachine start ...
	I1227 10:04:23.656057  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:23.683901  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:23.684239  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:23.684248  508478 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:04:23.684911  508478 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:04:26.829827  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-017122
	
	I1227 10:04:26.829864  508478 ubuntu.go:182] provisioning hostname "embed-certs-017122"
	I1227 10:04:26.829944  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:26.848484  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:26.848806  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:26.848817  508478 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-017122 && echo "embed-certs-017122" | sudo tee /etc/hostname
	I1227 10:04:27.005008  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-017122
	
	I1227 10:04:27.005109  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:27.024304  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:27.024624  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:27.024640  508478 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-017122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-017122/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-017122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:04:27.166618  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:04:27.166687  508478 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:04:27.166731  508478 ubuntu.go:190] setting up certificates
	I1227 10:04:27.166770  508478 provision.go:84] configureAuth start
	I1227 10:04:27.166859  508478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-017122
	I1227 10:04:27.185246  508478 provision.go:143] copyHostCerts
	I1227 10:04:27.185327  508478 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:04:27.185349  508478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:04:27.185429  508478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:04:27.185534  508478 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:04:27.185543  508478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:04:27.185576  508478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:04:27.185639  508478 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:04:27.185662  508478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:04:27.185694  508478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:04:27.185755  508478 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.embed-certs-017122 san=[127.0.0.1 192.168.76.2 embed-certs-017122 localhost minikube]
	I1227 10:04:28.113037  508478 provision.go:177] copyRemoteCerts
	I1227 10:04:28.113174  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:04:28.113259  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.131769  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:28.243263  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:04:28.267550  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:04:28.291847  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:04:28.313892  508478 provision.go:87] duration metric: took 1.147090535s to configureAuth
	I1227 10:04:28.313926  508478 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:04:28.314131  508478 config.go:182] Loaded profile config "embed-certs-017122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:28.314714  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.347712  508478 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:28.348026  508478 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1227 10:04:28.348048  508478 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:04:28.699722  508478 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:04:28.699797  508478 machine.go:97] duration metric: took 5.043817631s to provisionDockerMachine
	I1227 10:04:28.699821  508478 client.go:176] duration metric: took 10.714194264s to LocalClient.Create
	I1227 10:04:28.699855  508478 start.go:167] duration metric: took 10.714276217s to libmachine.API.Create "embed-certs-017122"
	I1227 10:04:28.699893  508478 start.go:293] postStartSetup for "embed-certs-017122" (driver="docker")
	I1227 10:04:28.699918  508478 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:04:28.700054  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:04:28.700118  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.727495  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:28.827673  508478 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:04:28.831255  508478 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:04:28.831285  508478 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:04:28.831297  508478 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:04:28.831355  508478 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:04:28.831441  508478 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:04:28.831548  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:04:28.840712  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:04:28.868965  508478 start.go:296] duration metric: took 169.02811ms for postStartSetup
	I1227 10:04:28.869350  508478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-017122
	I1227 10:04:28.889534  508478 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json ...
	I1227 10:04:28.889827  508478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:04:28.889878  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:28.911983  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:29.011630  508478 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:04:29.016867  508478 start.go:128] duration metric: took 11.035184784s to createHost
	I1227 10:04:29.016893  508478 start.go:83] releasing machines lock for "embed-certs-017122", held for 11.035319203s
	I1227 10:04:29.016975  508478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-017122
	I1227 10:04:29.034813  508478 ssh_runner.go:195] Run: cat /version.json
	I1227 10:04:29.034853  508478 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:04:29.034871  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:29.034914  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:29.052778  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:29.062277  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:29.254264  508478 ssh_runner.go:195] Run: systemctl --version
	I1227 10:04:29.266887  508478 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:04:29.325555  508478 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:04:29.332462  508478 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:04:29.332536  508478 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:04:29.370497  508478 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:04:29.370524  508478 start.go:496] detecting cgroup driver to use...
	I1227 10:04:29.370560  508478 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:04:29.370615  508478 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:04:29.395640  508478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:04:29.410720  508478 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:04:29.410788  508478 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:04:29.432472  508478 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:04:29.454832  508478 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:04:29.587722  508478 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:04:29.713897  508478 docker.go:234] disabling docker service ...
	I1227 10:04:29.713974  508478 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:04:29.735341  508478 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:04:29.749499  508478 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:04:29.878880  508478 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:04:30.052006  508478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:04:30.075471  508478 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:04:30.096067  508478 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:04:30.096193  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.107312  508478 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:04:30.107466  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.117893  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.128196  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.141596  508478 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:04:30.155733  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.167807  508478 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.190521  508478 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:30.203864  508478 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:04:30.213330  508478 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:04:30.224216  508478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:30.389282  508478 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:04:30.570177  508478 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:04:30.570251  508478 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:04:30.574783  508478 start.go:574] Will wait 60s for crictl version
	I1227 10:04:30.574847  508478 ssh_runner.go:195] Run: which crictl
	I1227 10:04:30.579682  508478 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:04:30.606090  508478 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:04:30.606204  508478 ssh_runner.go:195] Run: crio --version
	I1227 10:04:30.640961  508478 ssh_runner.go:195] Run: crio --version
	I1227 10:04:30.679772  508478 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:04:30.682420  508478 cli_runner.go:164] Run: docker network inspect embed-certs-017122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:04:30.707793  508478 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:04:30.712380  508478 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:04:30.726805  508478 kubeadm.go:884] updating cluster {Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:04:30.726948  508478 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:30.727022  508478 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:04:30.789545  508478 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:04:30.789571  508478 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:04:30.789627  508478 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:04:30.839298  508478 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:04:30.839329  508478 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:04:30.839340  508478 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:04:30.839427  508478 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-017122 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:04:30.839524  508478 ssh_runner.go:195] Run: crio config
	I1227 10:04:30.925610  508478 cni.go:84] Creating CNI manager for ""
	I1227 10:04:30.925634  508478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:04:30.925653  508478 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:04:30.925679  508478 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-017122 NodeName:embed-certs-017122 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:04:30.925821  508478 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-017122"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:04:30.925904  508478 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:04:30.934866  508478 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:04:30.934941  508478 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:04:30.943939  508478 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 10:04:30.971827  508478 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:04:30.988102  508478 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1227 10:04:31.004593  508478 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:04:31.009527  508478 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:04:31.023957  508478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:31.169285  508478 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:04:31.194596  508478 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122 for IP: 192.168.76.2
	I1227 10:04:31.194615  508478 certs.go:195] generating shared ca certs ...
	I1227 10:04:31.194630  508478 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:31.194768  508478 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:04:31.194809  508478 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:04:31.194816  508478 certs.go:257] generating profile certs ...
	I1227 10:04:31.194872  508478 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/client.key
	I1227 10:04:31.194892  508478 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/client.crt with IP's: []
	I1227 10:04:31.526808  508478 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/client.crt ...
	I1227 10:04:31.526839  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/client.crt: {Name:mkb280ccc46edae05fd980fcc8972cd0984e8f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:31.527054  508478 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/client.key ...
	I1227 10:04:31.527068  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/client.key: {Name:mkb069d2b513c059174d1d054d7c6c7fca1393af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:31.527172  508478 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.key.e957edd8
	I1227 10:04:31.527190  508478 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.crt.e957edd8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:04:31.732789  508478 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.crt.e957edd8 ...
	I1227 10:04:31.732825  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.crt.e957edd8: {Name:mk5b3c821c40a60cf770d95bf5bcf41e7549ec44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:31.733049  508478 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.key.e957edd8 ...
	I1227 10:04:31.733065  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.key.e957edd8: {Name:mk469003ca95ff95d3ed7c94e5b8f026861da5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:31.733156  508478 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.crt.e957edd8 -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.crt
	I1227 10:04:31.733245  508478 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.key.e957edd8 -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.key
	I1227 10:04:31.733323  508478 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.key
	I1227 10:04:31.733342  508478 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.crt with IP's: []
	I1227 10:04:32.163508  508478 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.crt ...
	I1227 10:04:32.163540  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.crt: {Name:mk17c8cb3a05f7dfc35bcb266da4f62fbb1f3ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:32.163758  508478 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.key ...
	I1227 10:04:32.163774  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.key: {Name:mk7f040aa4981b49c1bd02d91da0cb993f60d598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:32.163971  508478 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:04:32.164019  508478 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:04:32.164034  508478 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:04:32.164061  508478 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:04:32.164091  508478 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:04:32.164123  508478 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:04:32.164173  508478 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:04:32.164736  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:04:32.193552  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:04:32.222994  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:04:32.263166  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:04:32.291419  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 10:04:32.316209  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:04:32.338134  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:04:32.362641  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:04:32.389432  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:04:32.412320  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:04:32.435825  508478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:04:32.471297  508478 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:04:32.493070  508478 ssh_runner.go:195] Run: openssl version
	I1227 10:04:32.499948  508478 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:32.510393  508478 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:04:32.522862  508478 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:32.528595  508478 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:32.528668  508478 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:32.574413  508478 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:04:32.583651  508478 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:04:32.596529  508478 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:04:32.608372  508478 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:04:32.617581  508478 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:04:32.622144  508478 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:04:32.622228  508478 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:04:32.667929  508478 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:04:32.676543  508478 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 10:04:32.685045  508478 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:04:32.693935  508478 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:04:32.702106  508478 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:04:32.707845  508478 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:04:32.707913  508478 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:04:32.756945  508478 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:04:32.767617  508478 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:04:32.777020  508478 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:04:32.781758  508478 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:04:32.781814  508478 kubeadm.go:401] StartCluster: {Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:04:32.781898  508478 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:04:32.781958  508478 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:04:32.831588  508478 cri.go:96] found id: ""
	I1227 10:04:32.831660  508478 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:04:32.846379  508478 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:04:32.857255  508478 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:04:32.857326  508478 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:04:32.872791  508478 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:04:32.872818  508478 kubeadm.go:158] found existing configuration files:
	
	I1227 10:04:32.872873  508478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:04:32.885543  508478 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:04:32.885614  508478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:04:32.899275  508478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:04:32.922535  508478 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:04:32.922609  508478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:04:32.958815  508478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:04:32.976254  508478 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:04:32.976317  508478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:04:32.987482  508478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:04:32.997316  508478 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:04:32.997375  508478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:04:33.008869  508478 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:04:33.065271  508478 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:04:33.065645  508478 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:04:33.182138  508478 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:04:33.182227  508478 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:04:33.182263  508478 kubeadm.go:319] OS: Linux
	I1227 10:04:33.182309  508478 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:04:33.182366  508478 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:04:33.182421  508478 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:04:33.182474  508478 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:04:33.182526  508478 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:04:33.182584  508478 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:04:33.182634  508478 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:04:33.182686  508478 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:04:33.182735  508478 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:04:33.262171  508478 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:04:33.262287  508478 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:04:33.262392  508478 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:04:33.277906  508478 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.927647236Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.931403246Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.931571838Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.931654761Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.94089641Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.941060564Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.941139843Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.945744667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.945909412Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.94598458Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.949817842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:04:18 no-preload-021144 crio[656]: time="2025-12-27T10:04:18.949974267Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.145580488Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=68487fb3-0408-4a66-93b3-039d4b040ddd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.147417893Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=99f7fc65-4b70-4a67-9bf2-53d154b55344 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.149183831Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper" id=0c5cc63f-002f-4cd6-af55-aced53c90731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.149310807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.167374557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.167945206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.201633798Z" level=info msg="Created container bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper" id=0c5cc63f-002f-4cd6-af55-aced53c90731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.203885781Z" level=info msg="Starting container: bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186" id=f83c5185-9ff5-49cc-b1d9-ebb41a03cb5a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.205528632Z" level=info msg="Started container" PID=1731 containerID=bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper id=f83c5185-9ff5-49cc-b1d9-ebb41a03cb5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff19dd9f32e5a39cf3d1d75aaf91a9acccc137ec00d90c7ecfe818cbdd125d47
	Dec 27 10:04:21 no-preload-021144 conmon[1729]: conmon bc7f7a46946933577438 <ninfo>: container 1731 exited with status 1
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.32940047Z" level=info msg="Removing container: 09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff" id=2e4ea3b7-331b-4fab-bc2b-4a009b1e49df name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.340555579Z" level=info msg="Error loading conmon cgroup of container 09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff: cgroup deleted" id=2e4ea3b7-331b-4fab-bc2b-4a009b1e49df name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:04:21 no-preload-021144 crio[656]: time="2025-12-27T10:04:21.347009157Z" level=info msg="Removed container 09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk/dashboard-metrics-scraper" id=2e4ea3b7-331b-4fab-bc2b-4a009b1e49df name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	bc7f7a4694693       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   ff19dd9f32e5a       dashboard-metrics-scraper-867fb5f87b-9hmrk   kubernetes-dashboard
	6293953f907e5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   ecff762185516       storage-provisioner                          kube-system
	aeca985cca2e2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   e2f9162cf13ac       kubernetes-dashboard-b84665fb8-khhmw         kubernetes-dashboard
	c4fa9ef9befbf       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   8b6bee8ccda9f       coredns-7d764666f9-p7h6b                     kube-system
	d1c27e30bf37a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   f883c95daa7ad       busybox                                      default
	45ecaf16299f9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   ecff762185516       storage-provisioner                          kube-system
	352daa6cd4a6a       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   959150f7ef3e8       kindnet-hnnqk                                kube-system
	5bf3302e4122f       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago       Running             kube-proxy                  1                   9d145304a59a0       kube-proxy-gzt2m                             kube-system
	826fac1ed8726       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   513d710e39756       kube-controller-manager-no-preload-021144    kube-system
	8dcf711110ef0       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   1383bc7496a3c       kube-scheduler-no-preload-021144             kube-system
	327ad0c5ea77e       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   db4fd6ced6b29       kube-apiserver-no-preload-021144             kube-system
	de4d8646c03a2       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   9e2d9af88cad5       etcd-no-preload-021144                       kube-system
	
	
	==> coredns [c4fa9ef9befbfd98ffa6e9119e6de96da382909198101360695a996f61df6014] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36207 - 50508 "HINFO IN 8237353146647902975.9173068558679202732. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021961259s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-021144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-021144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=no-preload-021144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_02_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:02:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-021144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:04:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:04:08 +0000   Sat, 27 Dec 2025 10:02:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-021144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                cb1e511d-4a03-4ff7-9ae5-96dca8c8e0f7
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-p7h6b                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-021144                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-hnnqk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-021144              250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-021144     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-gzt2m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-021144              100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-9hmrk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-khhmw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-021144 event: Registered Node no-preload-021144 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node no-preload-021144 event: Registered Node no-preload-021144 in Controller
	
	
	==> dmesg <==
	[Dec27 09:29] overlayfs: idmapped layers are currently not supported
	[Dec27 09:32] overlayfs: idmapped layers are currently not supported
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [de4d8646c03a270d9b795d812404b843b39536ef99277aa58fc56f50232ffd89] <==
	{"level":"info","ts":"2025-12-27T10:03:33.797484Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:03:33.797520Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:03:33.797997Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:03:33.798434Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:03:33.798500Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:03:33.798680Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:03:33.798736Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:03:34.269911Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:03:34.269964Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:03:34.270017Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:03:34.270036Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:03:34.270052Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.274267Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.274338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:03:34.274380Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.274427Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:03:34.282430Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-021144 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:03:34.282596Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:03:34.282715Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:03:34.282775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:03:34.282785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:03:34.283597Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:03:34.285287Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:03:34.285986Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:03:34.305442Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:04:34 up  2:47,  0 user,  load average: 1.94, 1.81, 2.00
	Linux no-preload-021144 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [352daa6cd4a6a1964f4f855817ee1e9291a42a8caa5882022bce5a87b5ef38e7] <==
	I1227 10:03:38.721527       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:03:38.721888       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:03:38.722107       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:03:38.722191       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:03:38.722232       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:03:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:03:38.927361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:03:38.927992       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:03:38.928097       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:03:38.928284       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:04:08.927038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:04:08.928265       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:04:08.928428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:04:08.928511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:04:10.428761       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:04:10.428873       1 metrics.go:72] Registering metrics
	I1227 10:04:10.428968       1 controller.go:711] "Syncing nftables rules"
	I1227 10:04:18.927356       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:04:18.927408       1 main.go:301] handling current node
	I1227 10:04:28.927327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:04:28.927358       1 main.go:301] handling current node
	
	
	==> kube-apiserver [327ad0c5ea77e5cb07dbc495716c696dbfb5bd8050c9432839733c1be978ab8f] <==
	I1227 10:03:37.324513       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:03:37.348979       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:37.356918       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:03:37.356934       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:03:37.357113       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:03:37.358777       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:03:37.358886       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:37.358827       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:03:37.361382       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:03:37.361965       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:03:37.361971       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:03:37.361978       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:03:37.371021       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 10:03:37.452112       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:03:37.823911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:03:37.872520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:03:37.901840       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:03:37.913606       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:03:37.924745       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:03:37.959004       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:03:38.006681       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.150.105"}
	I1227 10:03:38.034177       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.154.58"}
	I1227 10:03:40.966480       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:03:41.016155       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:03:41.066584       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [826fac1ed8726ef3a42c94e0e83f18ada09304c6805c9f89f7b3c2d04e4c1a04] <==
	I1227 10:03:40.420741       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.420780       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.420808       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.421047       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-021144"
	I1227 10:03:40.421331       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:03:40.421900       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:03:40.421952       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:40.421982       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422092       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422225       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422282       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422358       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422537       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.422625       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.423079       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.423410       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.423572       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.424955       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.433850       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.441399       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:40.450282       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.531222       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:40.531324       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:03:40.531356       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:03:40.545409       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5bf3302e4122f8d606058ec1bcb193df3d81177fb1f03e2221b04f64f3be159b] <==
	I1227 10:03:38.715165       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:03:38.810214       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:38.910905       1 shared_informer.go:377] "Caches are synced"
	I1227 10:03:38.910945       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:03:38.911011       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:03:38.951963       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:03:38.952077       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:03:38.955916       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:03:38.956296       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:03:38.956746       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:03:38.957995       1 config.go:200] "Starting service config controller"
	I1227 10:03:38.962997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:03:38.958134       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:03:38.963130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:03:38.958241       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:03:38.963206       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:03:38.958875       1 config.go:309] "Starting node config controller"
	I1227 10:03:38.963263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:03:38.963291       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:03:39.063255       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 10:03:39.063269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:03:39.063282       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8dcf711110ef0028adc15392e388eb3ea778715b2b9bac9fb0a2657eff4887a0] <==
	I1227 10:03:35.130285       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:03:37.188116       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:03:37.188150       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:03:37.188160       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:03:37.188167       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:03:37.296374       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:03:37.296407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:03:37.327078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:03:37.327225       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:03:37.327246       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:03:37.327262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:03:37.450399       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:03:55 no-preload-021144 kubelet[781]: I1227 10:03:55.498647     781 scope.go:122] "RemoveContainer" containerID="0018c3dadf334ca3e015cca26eccaa4eb0ea9ad26e451cceaf6c4e0f36fd817e"
	Dec 27 10:03:55 no-preload-021144 kubelet[781]: E1227 10:03:55.498906     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: E1227 10:03:57.144758     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: I1227 10:03:57.145365     781 scope.go:122] "RemoveContainer" containerID="0018c3dadf334ca3e015cca26eccaa4eb0ea9ad26e451cceaf6c4e0f36fd817e"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: I1227 10:03:57.261647     781 scope.go:122] "RemoveContainer" containerID="0018c3dadf334ca3e015cca26eccaa4eb0ea9ad26e451cceaf6c4e0f36fd817e"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: E1227 10:03:57.261961     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: I1227 10:03:57.261987     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:03:57 no-preload-021144 kubelet[781]: E1227 10:03:57.262183     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:05 no-preload-021144 kubelet[781]: E1227 10:04:05.498948     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:05 no-preload-021144 kubelet[781]: I1227 10:04:05.499444     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:04:05 no-preload-021144 kubelet[781]: E1227 10:04:05.499691     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:09 no-preload-021144 kubelet[781]: I1227 10:04:09.294019     781 scope.go:122] "RemoveContainer" containerID="45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775"
	Dec 27 10:04:14 no-preload-021144 kubelet[781]: E1227 10:04:14.665149     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7h6b" containerName="coredns"
	Dec 27 10:04:21 no-preload-021144 kubelet[781]: E1227 10:04:21.144886     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:21 no-preload-021144 kubelet[781]: I1227 10:04:21.144927     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:04:21 no-preload-021144 kubelet[781]: I1227 10:04:21.328082     781 scope.go:122] "RemoveContainer" containerID="09f499235917c35bfe79389d2cddcff1d6f64f5c64a8f8fec8343053238b46ff"
	Dec 27 10:04:22 no-preload-021144 kubelet[781]: E1227 10:04:22.332227     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:22 no-preload-021144 kubelet[781]: I1227 10:04:22.332262     781 scope.go:122] "RemoveContainer" containerID="bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	Dec 27 10:04:22 no-preload-021144 kubelet[781]: E1227 10:04:22.332403     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:25 no-preload-021144 kubelet[781]: E1227 10:04:25.498770     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" containerName="dashboard-metrics-scraper"
	Dec 27 10:04:25 no-preload-021144 kubelet[781]: I1227 10:04:25.498820     781 scope.go:122] "RemoveContainer" containerID="bc7f7a469469335774383c3c8e2a4c8a94b08630a54e8d542062c409fece5186"
	Dec 27 10:04:25 no-preload-021144 kubelet[781]: E1227 10:04:25.498983     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-9hmrk_kubernetes-dashboard(598c4343-1832-4daf-826a-447e53b60c06)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-9hmrk" podUID="598c4343-1832-4daf-826a-447e53b60c06"
	Dec 27 10:04:28 no-preload-021144 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:04:28 no-preload-021144 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:04:28 no-preload-021144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [aeca985cca2e204c06a9df88fcfd42a200defdc60b6d97b4d4d19192e4a14d30] <==
	2025/12/27 10:03:49 Using namespace: kubernetes-dashboard
	2025/12/27 10:03:49 Using in-cluster config to connect to apiserver
	2025/12/27 10:03:49 Using secret token for csrf signing
	2025/12/27 10:03:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:03:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:03:49 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:03:49 Generating JWE encryption key
	2025/12/27 10:03:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:03:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:03:50 Initializing JWE encryption key from synchronized object
	2025/12/27 10:03:50 Creating in-cluster Sidecar client
	2025/12/27 10:03:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:03:50 Serving insecurely on HTTP port: 9090
	2025/12/27 10:04:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:03:49 Starting overwatch
	
	
	==> storage-provisioner [45ecaf16299f98151be059174b564a2b8e41d59011fe0dc39e4f49fe2d671775] <==
	I1227 10:03:38.620988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:04:08.622554       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6293953f907e5528b67e0d977b446baf30b852aa274e59762369a92e0ab91949] <==
	I1227 10:04:09.358001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:04:09.372042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:04:09.372091       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:04:09.375403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:12.830917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:17.092286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:20.690895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:23.745151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:26.768717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:26.775953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:04:26.776099       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:04:26.776278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-021144_3c7a24a1-f341-4c0a-8a61-f4ee43c50ac4!
	I1227 10:04:26.777179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f5df63c-860d-4bc0-ad23-f9b0ec7df9e5", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-021144_3c7a24a1-f341-4c0a-8a61-f4ee43c50ac4 became leader
	W1227 10:04:26.785171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:26.791235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:04:26.877086       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-021144_3c7a24a1-f341-4c0a-8a61-f4ee43c50ac4!
	W1227 10:04:28.795085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:28.805716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:30.809453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:30.825669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:32.833206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:32.839282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:34.849621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:04:34.868853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-021144 -n no-preload-021144
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-021144 -n no-preload-021144: exit status 2 (434.584741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-021144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.45s)
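
To iterate on this Pause failure outside CI, the subtest can be re-run on its own from a minikube source checkout. A minimal sketch; the integration harness also takes its own flags (for example which minikube binary, driver and container runtime to use), which are omitted here because they depend on the local setup:

	# hypothetical local re-run of just this subtest
	go test ./test/integration -run 'TestStartStop/group/no-preload/serial/Pause' -timeout 30m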

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.606029ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:05:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-017122 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-017122 describe deploy/metrics-server -n kube-system: exit status 1 (80.665515ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-017122 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
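The enable step fails before the metrics-server deployment is ever created: the addon command's paused check shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node, so the later `kubectl describe deploy/metrics-server` and the image assertion at start_stop_delete_test.go:219 can only see an empty deployment. A minimal sketch for confirming the two halves independently, assuming the embed-certs-017122 profile is still up; the /run/runc path and the expected image string are taken from the output above, everything else is illustrative:

    # does the runc state directory the paused check expects exist on the node?
    out/minikube-linux-arm64 ssh -p embed-certs-017122 -- sudo ls -ld /run/runc

    # once the addon does enable, read back the image override the test asserts on
    kubectl --context embed-certs-017122 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4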
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-017122
helpers_test.go:244: (dbg) docker inspect embed-certs-017122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4",
	        "Created": "2025-12-27T10:04:22.683463694Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 508916,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:04:22.745220198Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/hosts",
	        "LogPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4-json.log",
	        "Name": "/embed-certs-017122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-017122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-017122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4",
	                "LowerDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-017122",
	                "Source": "/var/lib/docker/volumes/embed-certs-017122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-017122",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-017122",
	                "name.minikube.sigs.k8s.io": "embed-certs-017122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72b44d8c724ebebadbf18f0db983c4a9c4d6255e8e7bd5bf32575acc564344f7",
	            "SandboxKey": "/var/run/docker/netns/72b44d8c724e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-017122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b7:2c:09:08:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ffc320fafa322491008f70d428c80b42cc8ee40dadd5618a8bbe80fddaf33d5",
	                    "EndpointID": "93916095c2ba20f490501e648deafc7144caf3d2c3236a8d170376a4ec30858c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-017122",
	                        "f2b20a6dc274"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-017122 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-017122 logs -n 25: (1.164508266s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-057459                                                                                                                                                                                                                        │ cert-options-057459          │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 09:59 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 09:59 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-156305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │                     │
	│ stop    │ -p old-k8s-version-156305 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                                                                                                  │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                                                                                                    │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p disable-driver-mounts-242374                                                                                                                                                                                                               │ disable-driver-mounts-242374 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:04:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:04:39.048291  511805 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:04:39.048390  511805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:39.048396  511805 out.go:374] Setting ErrFile to fd 2...
	I1227 10:04:39.048401  511805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:39.048739  511805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:04:39.049352  511805 out.go:368] Setting JSON to false
	I1227 10:04:39.050529  511805 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10028,"bootTime":1766819851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:04:39.050595  511805 start.go:143] virtualization:  
	I1227 10:04:39.054315  511805 out.go:179] * [default-k8s-diff-port-681744] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:04:39.057545  511805 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:04:39.059617  511805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:04:39.059809  511805 notify.go:221] Checking for updates...
	I1227 10:04:39.065539  511805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:04:39.068567  511805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:04:39.071571  511805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:04:39.074537  511805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:04:39.078069  511805 config.go:182] Loaded profile config "embed-certs-017122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:39.078267  511805 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:04:39.108701  511805 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:04:39.108815  511805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:39.210891  511805 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:04:39.200650315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:39.210999  511805 docker.go:319] overlay module found
	I1227 10:04:39.214232  511805 out.go:179] * Using the docker driver based on user configuration
	I1227 10:04:39.217114  511805 start.go:309] selected driver: docker
	I1227 10:04:39.217141  511805 start.go:928] validating driver "docker" against <nil>
	I1227 10:04:39.217155  511805 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:04:39.217850  511805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:04:39.295647  511805 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:04:39.281767872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:04:39.295786  511805 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:04:39.296061  511805 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:04:39.298963  511805 out.go:179] * Using Docker driver with root privileges
	I1227 10:04:39.301837  511805 cni.go:84] Creating CNI manager for ""
	I1227 10:04:39.301906  511805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:04:39.301915  511805 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:04:39.302000  511805 start.go:353] cluster config:
	{Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:04:39.305241  511805 out.go:179] * Starting "default-k8s-diff-port-681744" primary control-plane node in "default-k8s-diff-port-681744" cluster
	I1227 10:04:39.308214  511805 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:04:39.311110  511805 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:04:39.313986  511805 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:39.314030  511805 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:04:39.314054  511805 cache.go:65] Caching tarball of preloaded images
	I1227 10:04:39.314140  511805 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:04:39.314225  511805 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:04:39.314354  511805 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json ...
	I1227 10:04:39.314373  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json: {Name:mk78422831fbadac0dc1b452ed004d85d0612709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:39.314524  511805 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:04:39.341093  511805 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:04:39.341118  511805 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:04:39.341141  511805 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:04:39.341170  511805 start.go:360] acquireMachinesLock for default-k8s-diff-port-681744: {Name:mk8a28038e1b078aa1c0d3cea0d9a4fa9fc07d3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:04:39.341283  511805 start.go:364] duration metric: took 92.079µs to acquireMachinesLock for "default-k8s-diff-port-681744"
	I1227 10:04:39.341320  511805 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:04:39.341384  511805 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:04:38.324554  508478 out.go:252]   - Booting up control plane ...
	I1227 10:04:38.324678  508478 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:04:38.324765  508478 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:04:38.325928  508478 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:04:38.352415  508478 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:04:38.352608  508478 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:04:38.366404  508478 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:04:38.366520  508478 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:04:38.366561  508478 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:04:38.528638  508478 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:04:38.528766  508478 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:04:40.030967  508478 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502751649s
	I1227 10:04:40.044314  508478 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:04:40.044424  508478 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 10:04:40.044522  508478 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:04:40.044605  508478 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:04:39.344706  511805 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:04:39.344947  511805 start.go:159] libmachine.API.Create for "default-k8s-diff-port-681744" (driver="docker")
	I1227 10:04:39.344980  511805 client.go:173] LocalClient.Create starting
	I1227 10:04:39.345042  511805 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:04:39.345087  511805 main.go:144] libmachine: Decoding PEM data...
	I1227 10:04:39.345106  511805 main.go:144] libmachine: Parsing certificate...
	I1227 10:04:39.345157  511805 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:04:39.345179  511805 main.go:144] libmachine: Decoding PEM data...
	I1227 10:04:39.345190  511805 main.go:144] libmachine: Parsing certificate...
	I1227 10:04:39.345539  511805 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-681744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:04:39.375725  511805 cli_runner.go:211] docker network inspect default-k8s-diff-port-681744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:04:39.375809  511805 network_create.go:284] running [docker network inspect default-k8s-diff-port-681744] to gather additional debugging logs...
	I1227 10:04:39.375829  511805 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-681744
	W1227 10:04:39.393798  511805 cli_runner.go:211] docker network inspect default-k8s-diff-port-681744 returned with exit code 1
	I1227 10:04:39.393830  511805 network_create.go:287] error running [docker network inspect default-k8s-diff-port-681744]: docker network inspect default-k8s-diff-port-681744: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-681744 not found
	I1227 10:04:39.393843  511805 network_create.go:289] output of [docker network inspect default-k8s-diff-port-681744]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-681744 not found
	
	** /stderr **
	I1227 10:04:39.393947  511805 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:04:39.428113  511805 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:04:39.428504  511805 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:04:39.428739  511805 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:04:39.429034  511805 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ffc320fafa3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:f9:a0:95:0a:b2} reservation:<nil>}
	I1227 10:04:39.429464  511805 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7d30}
	I1227 10:04:39.429480  511805 network_create.go:124] attempt to create docker network default-k8s-diff-port-681744 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:04:39.429540  511805 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-681744 default-k8s-diff-port-681744
	I1227 10:04:39.507643  511805 network_create.go:108] docker network default-k8s-diff-port-681744 192.168.85.0/24 created
	I1227 10:04:39.507676  511805 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-681744" container
	I1227 10:04:39.507771  511805 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:04:39.525801  511805 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-681744 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-681744 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:04:39.546334  511805 oci.go:103] Successfully created a docker volume default-k8s-diff-port-681744
	I1227 10:04:39.546413  511805 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-681744-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-681744 --entrypoint /usr/bin/test -v default-k8s-diff-port-681744:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:04:40.198229  511805 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-681744
	I1227 10:04:40.198295  511805 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:40.198305  511805 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:04:40.198384  511805 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-681744:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:04:43.049222  508478 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.013556001s
	I1227 10:04:44.577526  508478 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.542288978s
	I1227 10:04:46.537862  508478 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501461102s
	I1227 10:04:46.576524  508478 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:04:46.589793  508478 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:04:46.603093  508478 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:04:46.603296  508478 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-017122 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:04:46.617223  508478 kubeadm.go:319] [bootstrap-token] Using token: ef8vik.xmqtvw1m56llr9lo
	I1227 10:04:46.620883  508478 out.go:252]   - Configuring RBAC rules ...
	I1227 10:04:46.621029  508478 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:04:46.625794  508478 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:04:46.636034  508478 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:04:46.640843  508478 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:04:46.645964  508478 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:04:46.652482  508478 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:04:46.944634  508478 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:04:47.379973  508478 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:04:47.945285  508478 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:04:47.946717  508478 kubeadm.go:319] 
	I1227 10:04:47.946790  508478 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:04:47.946795  508478 kubeadm.go:319] 
	I1227 10:04:47.946869  508478 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:04:47.946873  508478 kubeadm.go:319] 
	I1227 10:04:47.946897  508478 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:04:47.946952  508478 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:04:47.947000  508478 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:04:47.947004  508478 kubeadm.go:319] 
	I1227 10:04:47.947055  508478 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:04:47.947058  508478 kubeadm.go:319] 
	I1227 10:04:47.947104  508478 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:04:47.947107  508478 kubeadm.go:319] 
	I1227 10:04:47.947156  508478 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:04:47.947227  508478 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:04:47.947292  508478 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:04:47.947295  508478 kubeadm.go:319] 
	I1227 10:04:47.947375  508478 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:04:47.947447  508478 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:04:47.947452  508478 kubeadm.go:319] 
	I1227 10:04:47.947531  508478 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ef8vik.xmqtvw1m56llr9lo \
	I1227 10:04:47.947628  508478 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c \
	I1227 10:04:47.947647  508478 kubeadm.go:319] 	--control-plane 
	I1227 10:04:47.947652  508478 kubeadm.go:319] 
	I1227 10:04:47.947732  508478 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:04:47.947736  508478 kubeadm.go:319] 
	I1227 10:04:47.947813  508478 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ef8vik.xmqtvw1m56llr9lo \
	I1227 10:04:47.947910  508478 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c 
	I1227 10:04:47.952655  508478 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:04:47.953206  508478 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:04:47.953360  508478 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:04:47.953392  508478 cni.go:84] Creating CNI manager for ""
	I1227 10:04:47.953400  508478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:04:47.958451  508478 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:04:44.613587  511805 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-681744:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.415162339s)
	I1227 10:04:44.613622  511805 kic.go:203] duration metric: took 4.41531352s to extract preloaded images to volume ...
	W1227 10:04:44.613760  511805 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:04:44.613878  511805 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:04:44.713833  511805 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-681744 --name default-k8s-diff-port-681744 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-681744 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-681744 --network default-k8s-diff-port-681744 --ip 192.168.85.2 --volume default-k8s-diff-port-681744:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:04:45.224113  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Running}}
	I1227 10:04:45.266619  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:04:45.304106  511805 cli_runner.go:164] Run: docker exec default-k8s-diff-port-681744 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:04:45.372858  511805 oci.go:144] the created container "default-k8s-diff-port-681744" has a running status.
	I1227 10:04:45.372891  511805 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa...
	I1227 10:04:45.548771  511805 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:04:45.579248  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:04:45.607403  511805 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:04:45.607426  511805 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-681744 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:04:45.671230  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:04:45.702845  511805 machine.go:94] provisionDockerMachine start ...
	I1227 10:04:45.702945  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:45.738027  511805 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:45.738420  511805 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1227 10:04:45.738434  511805 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:04:45.739018  511805 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44160->127.0.0.1:33446: read: connection reset by peer
	I1227 10:04:48.893927  511805 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-681744
	
	I1227 10:04:48.893956  511805 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-681744"
	I1227 10:04:48.894059  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:48.913596  511805 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:48.913967  511805 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1227 10:04:48.913986  511805 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-681744 && echo "default-k8s-diff-port-681744" | sudo tee /etc/hostname
	I1227 10:04:49.073780  511805 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-681744
	
	I1227 10:04:49.073862  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:49.094717  511805 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:49.095041  511805 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1227 10:04:49.095064  511805 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-681744' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-681744/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-681744' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:04:49.255401  511805 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:04:49.255431  511805 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:04:49.255450  511805 ubuntu.go:190] setting up certificates
	I1227 10:04:49.255459  511805 provision.go:84] configureAuth start
	I1227 10:04:49.255530  511805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:04:49.272704  511805 provision.go:143] copyHostCerts
	I1227 10:04:49.272781  511805 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:04:49.272795  511805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:04:49.272886  511805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:04:49.272988  511805 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:04:49.273000  511805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:04:49.273028  511805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:04:49.273144  511805 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:04:49.273178  511805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:04:49.273213  511805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:04:49.273308  511805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-681744 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-681744 localhost minikube]
	I1227 10:04:50.072091  511805 provision.go:177] copyRemoteCerts
	I1227 10:04:50.072164  511805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:04:50.072214  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:50.092990  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:04:50.194209  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:04:50.218698  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 10:04:50.238402  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:04:50.261365  511805 provision.go:87] duration metric: took 1.005882603s to configureAuth
	I1227 10:04:50.261404  511805 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:04:50.261594  511805 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:50.261717  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:50.280366  511805 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:50.280715  511805 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1227 10:04:50.280736  511805 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:04:50.602208  511805 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:04:50.602283  511805 machine.go:97] duration metric: took 4.899413273s to provisionDockerMachine
	I1227 10:04:50.602310  511805 client.go:176] duration metric: took 11.257319002s to LocalClient.Create
	I1227 10:04:50.602360  511805 start.go:167] duration metric: took 11.257414527s to libmachine.API.Create "default-k8s-diff-port-681744"
	I1227 10:04:50.602388  511805 start.go:293] postStartSetup for "default-k8s-diff-port-681744" (driver="docker")
	I1227 10:04:50.602415  511805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:04:50.602505  511805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:04:50.602616  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:50.622925  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:04:50.726683  511805 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:04:50.729922  511805 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:04:50.729951  511805 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:04:50.729964  511805 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:04:50.730032  511805 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:04:50.730132  511805 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:04:50.730280  511805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:04:50.738325  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:04:50.759621  511805 start.go:296] duration metric: took 157.201427ms for postStartSetup
	I1227 10:04:50.760003  511805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:04:50.782371  511805 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json ...
	I1227 10:04:50.782674  511805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:04:50.782714  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:50.804972  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:04:50.904184  511805 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:04:50.908932  511805 start.go:128] duration metric: took 11.567532617s to createHost
	I1227 10:04:50.909001  511805 start.go:83] releasing machines lock for "default-k8s-diff-port-681744", held for 11.567702177s
	I1227 10:04:50.909102  511805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:04:50.926312  511805 ssh_runner.go:195] Run: cat /version.json
	I1227 10:04:50.926342  511805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:04:50.926363  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:50.926424  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:04:50.959386  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:04:50.962255  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:04:51.174917  511805 ssh_runner.go:195] Run: systemctl --version
	I1227 10:04:51.181670  511805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:04:51.228692  511805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:04:51.233263  511805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:04:51.233338  511805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:04:51.264108  511805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:04:51.264132  511805 start.go:496] detecting cgroup driver to use...
	I1227 10:04:51.264165  511805 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:04:51.264213  511805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:04:51.282860  511805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:04:51.295245  511805 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:04:51.295320  511805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:04:51.314689  511805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:04:51.333543  511805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:04:51.456959  511805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:04:51.623855  511805 docker.go:234] disabling docker service ...
	I1227 10:04:51.623945  511805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:04:51.648929  511805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:04:51.663349  511805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:04:51.790452  511805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:04:51.920376  511805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:04:51.933297  511805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:04:51.948667  511805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:04:51.948737  511805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:51.958116  511805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:04:51.958213  511805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:51.967143  511805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:51.976102  511805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:51.988278  511805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:04:52.001521  511805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:52.018356  511805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:52.038928  511805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:04:52.049914  511805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:04:52.061838  511805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:04:52.072899  511805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:52.236202  511805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:04:52.466130  511805 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:04:52.466280  511805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:04:52.474635  511805 start.go:574] Will wait 60s for crictl version
	I1227 10:04:52.475684  511805 ssh_runner.go:195] Run: which crictl
	I1227 10:04:52.485545  511805 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:04:52.515955  511805 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:04:52.516118  511805 ssh_runner.go:195] Run: crio --version
	I1227 10:04:52.551684  511805 ssh_runner.go:195] Run: crio --version
	I1227 10:04:52.589902  511805 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:04:47.961232  508478 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:04:47.965507  508478 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:04:47.965531  508478 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:04:47.979677  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:04:48.300973  508478 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:04:48.301111  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:48.301179  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-017122 minikube.k8s.io/updated_at=2025_12_27T10_04_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=embed-certs-017122 minikube.k8s.io/primary=true
	I1227 10:04:48.477994  508478 ops.go:34] apiserver oom_adj: -16
	I1227 10:04:48.478115  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:48.978306  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:49.478590  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:49.978308  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:50.478391  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:50.978678  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:51.479163  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:51.978259  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:52.478185  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:52.978946  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:53.479003  508478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:04:53.872108  508478 kubeadm.go:1114] duration metric: took 5.571031975s to wait for elevateKubeSystemPrivileges
	I1227 10:04:53.872137  508478 kubeadm.go:403] duration metric: took 21.090326662s to StartCluster
	I1227 10:04:53.872155  508478 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.872227  508478 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:04:53.873401  508478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.873684  508478 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:04:53.873801  508478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:04:53.874089  508478 config.go:182] Loaded profile config "embed-certs-017122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:53.874143  508478 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:04:53.874495  508478 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-017122"
	I1227 10:04:53.874511  508478 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-017122"
	I1227 10:04:53.874747  508478 host.go:66] Checking if "embed-certs-017122" exists ...
	I1227 10:04:53.875368  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:53.875572  508478 addons.go:70] Setting default-storageclass=true in profile "embed-certs-017122"
	I1227 10:04:53.875615  508478 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-017122"
	I1227 10:04:53.875894  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:53.880638  508478 out.go:179] * Verifying Kubernetes components...
	I1227 10:04:53.884762  508478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:53.916930  508478 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:04:52.592835  511805 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-681744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:04:52.614602  511805 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:04:52.619251  511805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:04:52.629245  511805 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:04:52.629376  511805 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:04:52.629430  511805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:04:52.666280  511805 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:04:52.666303  511805 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:04:52.666362  511805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:04:52.695347  511805 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:04:52.695380  511805 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:04:52.695388  511805 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1227 10:04:52.695471  511805 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-681744 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:04:52.695562  511805 ssh_runner.go:195] Run: crio config
	I1227 10:04:52.768539  511805 cni.go:84] Creating CNI manager for ""
	I1227 10:04:52.768565  511805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:04:52.768590  511805 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:04:52.768623  511805 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-681744 NodeName:default-k8s-diff-port-681744 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:04:52.768755  511805 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-681744"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:04:52.768840  511805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:04:52.779227  511805 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:04:52.779298  511805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:04:52.790727  511805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 10:04:52.808276  511805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:04:52.822932  511805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 10:04:52.837401  511805 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:04:52.841373  511805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:04:52.854397  511805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:53.009027  511805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:04:53.051555  511805 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744 for IP: 192.168.85.2
	I1227 10:04:53.051631  511805 certs.go:195] generating shared ca certs ...
	I1227 10:04:53.051662  511805 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.051878  511805 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:04:53.051971  511805 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:04:53.052001  511805 certs.go:257] generating profile certs ...
	I1227 10:04:53.052092  511805 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.key
	I1227 10:04:53.052147  511805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt with IP's: []
	I1227 10:04:53.345135  511805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt ...
	I1227 10:04:53.345209  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: {Name:mk3036a8c2100abdf581aa413d66feba19c5c36a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.345457  511805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.key ...
	I1227 10:04:53.345493  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.key: {Name:mk303a4f8f9a3d414fb8918b0487688e9b18ce70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.345655  511805 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key.263a07fe
	I1227 10:04:53.345697  511805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt.263a07fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:04:53.421846  511805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt.263a07fe ...
	I1227 10:04:53.421937  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt.263a07fe: {Name:mk031206a15b8fabe47be5bc9e37d3892304136d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.422210  511805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key.263a07fe ...
	I1227 10:04:53.422258  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key.263a07fe: {Name:mk52e9b5070dfef2ee7cb96f02c0f22671060cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.422388  511805 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt.263a07fe -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt
	I1227 10:04:53.422529  511805 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key.263a07fe -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key
	I1227 10:04:53.422647  511805 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key
	I1227 10:04:53.422686  511805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.crt with IP's: []
	I1227 10:04:53.778979  511805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.crt ...
	I1227 10:04:53.779015  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.crt: {Name:mk6fb26caca9fb58b69bb90e40d010f877664b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.779235  511805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key ...
	I1227 10:04:53.779254  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key: {Name:mke25053c18fb9bc7554004bf847514854c6df76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:53.779456  511805 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:04:53.779511  511805 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:04:53.779535  511805 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:04:53.779563  511805 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:04:53.779593  511805 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:04:53.779624  511805 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:04:53.779677  511805 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:04:53.780343  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:04:53.818254  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:04:53.862296  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:04:53.952245  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:04:54.030126  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 10:04:54.086675  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:04:54.123496  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:04:54.170761  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:04:54.209621  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:04:54.248825  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:04:54.285443  511805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:04:54.322886  511805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:04:54.345154  511805 ssh_runner.go:195] Run: openssl version
	I1227 10:04:54.360816  511805 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:54.378552  511805 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:04:54.391956  511805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:54.397226  511805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:54.397335  511805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:54.464272  511805 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:04:54.474582  511805 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:04:54.487238  511805 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:04:54.494821  511805 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:04:54.504500  511805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:04:54.509904  511805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:04:54.510023  511805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:04:54.562882  511805 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:04:54.571603  511805 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 10:04:54.581772  511805 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:04:54.589558  511805 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:04:54.597936  511805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:04:54.602661  511805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:04:54.602777  511805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:04:54.665002  511805 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:04:54.676506  511805 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:04:54.689932  511805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:04:54.694738  511805 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:04:54.694857  511805 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:04:54.694963  511805 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:04:54.695063  511805 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:04:54.757543  511805 cri.go:96] found id: ""
	I1227 10:04:54.757670  511805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:04:54.770599  511805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:04:54.782387  511805 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:04:54.782502  511805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:04:54.803185  511805 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:04:54.803256  511805 kubeadm.go:158] found existing configuration files:
	
	I1227 10:04:54.803341  511805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1227 10:04:54.824232  511805 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:04:54.824343  511805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:04:54.835065  511805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1227 10:04:54.848371  511805 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:04:54.848493  511805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:04:54.863540  511805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1227 10:04:54.878239  511805 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:04:54.878362  511805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:04:54.899284  511805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1227 10:04:54.916627  511805 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:04:54.916751  511805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:04:54.932234  511805 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:04:55.080373  511805 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:04:55.080813  511805 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:04:55.222435  511805 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:04:55.222589  511805 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:04:55.222658  511805 kubeadm.go:319] OS: Linux
	I1227 10:04:55.222736  511805 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:04:55.222821  511805 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:04:55.222901  511805 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:04:55.222983  511805 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:04:55.223065  511805 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:04:55.223147  511805 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:04:55.223226  511805 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:04:55.223312  511805 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:04:55.223391  511805 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:04:55.338334  511805 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:04:55.338509  511805 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:04:55.338639  511805 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:04:55.356820  511805 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:04:53.922279  508478 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:04:53.922317  508478 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:04:53.922397  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:53.923944  508478 addons.go:239] Setting addon default-storageclass=true in "embed-certs-017122"
	I1227 10:04:53.923987  508478 host.go:66] Checking if "embed-certs-017122" exists ...
	I1227 10:04:53.924412  508478 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:04:53.974294  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:53.984555  508478 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:04:53.984578  508478 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:04:53.984653  508478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:04:54.016359  508478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:04:54.511950  508478 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:04:54.516404  508478 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:04:54.615528  508478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:04:54.615735  508478 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:04:56.145410  508478 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.62895967s)
	I1227 10:04:56.145714  508478 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.529953144s)
	I1227 10:04:56.145912  508478 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.530336254s)
	I1227 10:04:56.145945  508478 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:04:56.146764  508478 node_ready.go:35] waiting up to 6m0s for node "embed-certs-017122" to be "Ready" ...
	I1227 10:04:56.149238  508478 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1227 10:04:56.152058  508478 addons.go:530] duration metric: took 2.277903431s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1227 10:04:56.652832  508478 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-017122" context rescaled to 1 replicas
	I1227 10:04:55.360256  511805 out.go:252]   - Generating certificates and keys ...
	I1227 10:04:55.360418  511805 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:04:55.360522  511805 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:04:55.855392  511805 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:04:56.237077  511805 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:04:56.383761  511805 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:04:56.852540  511805 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:04:57.063263  511805 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:04:57.063788  511805 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-681744 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:04:57.841032  511805 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:04:57.841553  511805 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-681744 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:04:58.098226  511805 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:04:58.377009  511805 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:04:58.699663  511805 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:04:58.699924  511805 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:04:58.949943  511805 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:04:59.404914  511805 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:04:59.531098  511805 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:04:59.709201  511805 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:05:00.090612  511805 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:05:00.090719  511805 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:05:00.108457  511805 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1227 10:04:58.151289  508478 node_ready.go:57] node "embed-certs-017122" has "Ready":"False" status (will retry)
	W1227 10:05:00.158447  508478 node_ready.go:57] node "embed-certs-017122" has "Ready":"False" status (will retry)
	W1227 10:05:02.650022  508478 node_ready.go:57] node "embed-certs-017122" has "Ready":"False" status (will retry)
	I1227 10:05:00.124650  511805 out.go:252]   - Booting up control plane ...
	I1227 10:05:00.124773  511805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:05:00.134322  511805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:05:00.134409  511805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:05:00.158683  511805 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:05:00.158789  511805 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:05:00.165545  511805 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:05:00.165648  511805 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:05:00.165689  511805 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:05:00.476513  511805 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:05:00.476690  511805 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:05:01.478345  511805 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001948112s
	I1227 10:05:01.482212  511805 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:05:01.482317  511805 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1227 10:05:01.482416  511805 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:05:01.482495  511805 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:05:02.995407  511805 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.512197598s
	I1227 10:05:04.699835  511805 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.217670952s
	I1227 10:05:06.483851  511805 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001636926s
	I1227 10:05:06.523130  511805 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:05:06.542064  511805 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:05:06.561905  511805 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:05:06.562139  511805 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-681744 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:05:06.574200  511805 kubeadm.go:319] [bootstrap-token] Using token: 22kn8r.djys67t3gob43bfu
	W1227 10:05:04.650343  508478 node_ready.go:57] node "embed-certs-017122" has "Ready":"False" status (will retry)
	W1227 10:05:07.150233  508478 node_ready.go:57] node "embed-certs-017122" has "Ready":"False" status (will retry)
	I1227 10:05:06.577175  511805 out.go:252]   - Configuring RBAC rules ...
	I1227 10:05:06.577303  511805 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:05:06.581645  511805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:05:06.591944  511805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:05:06.598856  511805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:05:06.602921  511805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:05:06.607117  511805 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:05:06.897788  511805 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:05:07.390487  511805 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:05:07.897321  511805 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:05:07.898673  511805 kubeadm.go:319] 
	I1227 10:05:07.898747  511805 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:05:07.898752  511805 kubeadm.go:319] 
	I1227 10:05:07.898834  511805 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:05:07.898842  511805 kubeadm.go:319] 
	I1227 10:05:07.898868  511805 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:05:07.898927  511805 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:05:07.898977  511805 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:05:07.898981  511805 kubeadm.go:319] 
	I1227 10:05:07.899035  511805 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:05:07.899040  511805 kubeadm.go:319] 
	I1227 10:05:07.899087  511805 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:05:07.899091  511805 kubeadm.go:319] 
	I1227 10:05:07.899143  511805 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:05:07.899229  511805 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:05:07.899298  511805 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:05:07.899301  511805 kubeadm.go:319] 
	I1227 10:05:07.899386  511805 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:05:07.899463  511805 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:05:07.899466  511805 kubeadm.go:319] 
	I1227 10:05:07.899550  511805 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 22kn8r.djys67t3gob43bfu \
	I1227 10:05:07.899655  511805 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c \
	I1227 10:05:07.899675  511805 kubeadm.go:319] 	--control-plane 
	I1227 10:05:07.899679  511805 kubeadm.go:319] 
	I1227 10:05:07.899764  511805 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:05:07.899768  511805 kubeadm.go:319] 
	I1227 10:05:07.899850  511805 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 22kn8r.djys67t3gob43bfu \
	I1227 10:05:07.899953  511805 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c 
	I1227 10:05:07.903038  511805 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:05:07.903496  511805 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:05:07.903652  511805 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:05:07.903691  511805 cni.go:84] Creating CNI manager for ""
	I1227 10:05:07.903704  511805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:05:07.908753  511805 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:05:07.911656  511805 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:05:07.916706  511805 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:05:07.916730  511805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:05:07.932559  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:05:08.292486  511805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:05:08.292627  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:08.292722  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-681744 minikube.k8s.io/updated_at=2025_12_27T10_05_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86 minikube.k8s.io/name=default-k8s-diff-port-681744 minikube.k8s.io/primary=true
	I1227 10:05:08.510025  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:08.510103  511805 ops.go:34] apiserver oom_adj: -16
	I1227 10:05:09.010339  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:08.651485  508478 node_ready.go:49] node "embed-certs-017122" is "Ready"
	I1227 10:05:08.651514  508478 node_ready.go:38] duration metric: took 12.504670662s for node "embed-certs-017122" to be "Ready" ...
	I1227 10:05:08.651528  508478 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:05:08.651585  508478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:05:08.674172  508478 api_server.go:72] duration metric: took 14.800447375s to wait for apiserver process to appear ...
	I1227 10:05:08.674248  508478 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:05:08.674271  508478 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:05:08.683895  508478 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:05:08.685425  508478 api_server.go:141] control plane version: v1.35.0
	I1227 10:05:08.685451  508478 api_server.go:131] duration metric: took 11.193551ms to wait for apiserver health ...
	I1227 10:05:08.685461  508478 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:05:08.716836  508478 system_pods.go:59] 8 kube-system pods found
	I1227 10:05:08.716935  508478 system_pods.go:61] "coredns-7d764666f9-bdwpn" [7ab96e40-4206-4459-bc72-d2eed89b2d21] Pending
	I1227 10:05:08.716960  508478 system_pods.go:61] "etcd-embed-certs-017122" [b63798cd-139a-42e2-9a83-9716d2aec1eb] Running
	I1227 10:05:08.716984  508478 system_pods.go:61] "kindnet-7ts9b" [b7367ee9-b0b9-46f4-8178-63756396ad78] Running
	I1227 10:05:08.717019  508478 system_pods.go:61] "kube-apiserver-embed-certs-017122" [e817c2be-36d8-4131-814a-804b35a31458] Running
	I1227 10:05:08.717040  508478 system_pods.go:61] "kube-controller-manager-embed-certs-017122" [d9707594-9cc2-4487-8669-80eb93113598] Running
	I1227 10:05:08.717069  508478 system_pods.go:61] "kube-proxy-knmrq" [54629088-9ecc-4f33-bfe8-943aa7e0dcba] Running
	I1227 10:05:08.717093  508478 system_pods.go:61] "kube-scheduler-embed-certs-017122" [adee9190-f93c-4cf1-885f-d68efc883348] Running
	I1227 10:05:08.717124  508478 system_pods.go:61] "storage-provisioner" [d9484e97-87d3-4568-bc0e-929f8c2bac3e] Pending
	I1227 10:05:08.717157  508478 system_pods.go:74] duration metric: took 31.688485ms to wait for pod list to return data ...
	I1227 10:05:08.717180  508478 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:05:08.726429  508478 default_sa.go:45] found service account: "default"
	I1227 10:05:08.726495  508478 default_sa.go:55] duration metric: took 9.291505ms for default service account to be created ...
	I1227 10:05:08.726520  508478 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:05:08.734601  508478 system_pods.go:86] 8 kube-system pods found
	I1227 10:05:08.734709  508478 system_pods.go:89] "coredns-7d764666f9-bdwpn" [7ab96e40-4206-4459-bc72-d2eed89b2d21] Pending
	I1227 10:05:08.734733  508478 system_pods.go:89] "etcd-embed-certs-017122" [b63798cd-139a-42e2-9a83-9716d2aec1eb] Running
	I1227 10:05:08.734758  508478 system_pods.go:89] "kindnet-7ts9b" [b7367ee9-b0b9-46f4-8178-63756396ad78] Running
	I1227 10:05:08.734803  508478 system_pods.go:89] "kube-apiserver-embed-certs-017122" [e817c2be-36d8-4131-814a-804b35a31458] Running
	I1227 10:05:08.734825  508478 system_pods.go:89] "kube-controller-manager-embed-certs-017122" [d9707594-9cc2-4487-8669-80eb93113598] Running
	I1227 10:05:08.734848  508478 system_pods.go:89] "kube-proxy-knmrq" [54629088-9ecc-4f33-bfe8-943aa7e0dcba] Running
	I1227 10:05:08.734883  508478 system_pods.go:89] "kube-scheduler-embed-certs-017122" [adee9190-f93c-4cf1-885f-d68efc883348] Running
	I1227 10:05:08.734919  508478 system_pods.go:89] "storage-provisioner" [d9484e97-87d3-4568-bc0e-929f8c2bac3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:05:08.734972  508478 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 10:05:08.952878  508478 system_pods.go:86] 8 kube-system pods found
	I1227 10:05:08.952985  508478 system_pods.go:89] "coredns-7d764666f9-bdwpn" [7ab96e40-4206-4459-bc72-d2eed89b2d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:05:08.953018  508478 system_pods.go:89] "etcd-embed-certs-017122" [b63798cd-139a-42e2-9a83-9716d2aec1eb] Running
	I1227 10:05:08.953065  508478 system_pods.go:89] "kindnet-7ts9b" [b7367ee9-b0b9-46f4-8178-63756396ad78] Running
	I1227 10:05:08.953089  508478 system_pods.go:89] "kube-apiserver-embed-certs-017122" [e817c2be-36d8-4131-814a-804b35a31458] Running
	I1227 10:05:08.953134  508478 system_pods.go:89] "kube-controller-manager-embed-certs-017122" [d9707594-9cc2-4487-8669-80eb93113598] Running
	I1227 10:05:08.953154  508478 system_pods.go:89] "kube-proxy-knmrq" [54629088-9ecc-4f33-bfe8-943aa7e0dcba] Running
	I1227 10:05:08.953176  508478 system_pods.go:89] "kube-scheduler-embed-certs-017122" [adee9190-f93c-4cf1-885f-d68efc883348] Running
	I1227 10:05:08.953221  508478 system_pods.go:89] "storage-provisioner" [d9484e97-87d3-4568-bc0e-929f8c2bac3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:05:09.276193  508478 system_pods.go:86] 8 kube-system pods found
	I1227 10:05:09.276278  508478 system_pods.go:89] "coredns-7d764666f9-bdwpn" [7ab96e40-4206-4459-bc72-d2eed89b2d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:05:09.276303  508478 system_pods.go:89] "etcd-embed-certs-017122" [b63798cd-139a-42e2-9a83-9716d2aec1eb] Running
	I1227 10:05:09.276338  508478 system_pods.go:89] "kindnet-7ts9b" [b7367ee9-b0b9-46f4-8178-63756396ad78] Running
	I1227 10:05:09.276365  508478 system_pods.go:89] "kube-apiserver-embed-certs-017122" [e817c2be-36d8-4131-814a-804b35a31458] Running
	I1227 10:05:09.276387  508478 system_pods.go:89] "kube-controller-manager-embed-certs-017122" [d9707594-9cc2-4487-8669-80eb93113598] Running
	I1227 10:05:09.276419  508478 system_pods.go:89] "kube-proxy-knmrq" [54629088-9ecc-4f33-bfe8-943aa7e0dcba] Running
	I1227 10:05:09.276437  508478 system_pods.go:89] "kube-scheduler-embed-certs-017122" [adee9190-f93c-4cf1-885f-d68efc883348] Running
	I1227 10:05:09.276460  508478 system_pods.go:89] "storage-provisioner" [d9484e97-87d3-4568-bc0e-929f8c2bac3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:05:09.581333  508478 system_pods.go:86] 8 kube-system pods found
	I1227 10:05:09.581373  508478 system_pods.go:89] "coredns-7d764666f9-bdwpn" [7ab96e40-4206-4459-bc72-d2eed89b2d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:05:09.581381  508478 system_pods.go:89] "etcd-embed-certs-017122" [b63798cd-139a-42e2-9a83-9716d2aec1eb] Running
	I1227 10:05:09.581387  508478 system_pods.go:89] "kindnet-7ts9b" [b7367ee9-b0b9-46f4-8178-63756396ad78] Running
	I1227 10:05:09.581392  508478 system_pods.go:89] "kube-apiserver-embed-certs-017122" [e817c2be-36d8-4131-814a-804b35a31458] Running
	I1227 10:05:09.581398  508478 system_pods.go:89] "kube-controller-manager-embed-certs-017122" [d9707594-9cc2-4487-8669-80eb93113598] Running
	I1227 10:05:09.581410  508478 system_pods.go:89] "kube-proxy-knmrq" [54629088-9ecc-4f33-bfe8-943aa7e0dcba] Running
	I1227 10:05:09.581419  508478 system_pods.go:89] "kube-scheduler-embed-certs-017122" [adee9190-f93c-4cf1-885f-d68efc883348] Running
	I1227 10:05:09.581433  508478 system_pods.go:89] "storage-provisioner" [d9484e97-87d3-4568-bc0e-929f8c2bac3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:05:09.991751  508478 system_pods.go:86] 8 kube-system pods found
	I1227 10:05:09.991786  508478 system_pods.go:89] "coredns-7d764666f9-bdwpn" [7ab96e40-4206-4459-bc72-d2eed89b2d21] Running
	I1227 10:05:09.991795  508478 system_pods.go:89] "etcd-embed-certs-017122" [b63798cd-139a-42e2-9a83-9716d2aec1eb] Running
	I1227 10:05:09.991800  508478 system_pods.go:89] "kindnet-7ts9b" [b7367ee9-b0b9-46f4-8178-63756396ad78] Running
	I1227 10:05:09.991805  508478 system_pods.go:89] "kube-apiserver-embed-certs-017122" [e817c2be-36d8-4131-814a-804b35a31458] Running
	I1227 10:05:09.991811  508478 system_pods.go:89] "kube-controller-manager-embed-certs-017122" [d9707594-9cc2-4487-8669-80eb93113598] Running
	I1227 10:05:09.991815  508478 system_pods.go:89] "kube-proxy-knmrq" [54629088-9ecc-4f33-bfe8-943aa7e0dcba] Running
	I1227 10:05:09.991820  508478 system_pods.go:89] "kube-scheduler-embed-certs-017122" [adee9190-f93c-4cf1-885f-d68efc883348] Running
	I1227 10:05:09.991825  508478 system_pods.go:89] "storage-provisioner" [d9484e97-87d3-4568-bc0e-929f8c2bac3e] Running
	I1227 10:05:09.991833  508478 system_pods.go:126] duration metric: took 1.265292498s to wait for k8s-apps to be running ...
	I1227 10:05:09.991847  508478 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:05:09.991907  508478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:05:10.023436  508478 system_svc.go:56] duration metric: took 31.578051ms WaitForService to wait for kubelet
	I1227 10:05:10.023524  508478 kubeadm.go:587] duration metric: took 16.14980506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:05:10.023564  508478 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:05:10.043079  508478 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:05:10.043158  508478 node_conditions.go:123] node cpu capacity is 2
	I1227 10:05:10.043190  508478 node_conditions.go:105] duration metric: took 19.588027ms to run NodePressure ...
	I1227 10:05:10.043235  508478 start.go:242] waiting for startup goroutines ...
	I1227 10:05:10.043260  508478 start.go:247] waiting for cluster config update ...
	I1227 10:05:10.043289  508478 start.go:256] writing updated cluster config ...
	I1227 10:05:10.043637  508478 ssh_runner.go:195] Run: rm -f paused
	I1227 10:05:10.048307  508478 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:05:10.052915  508478 pod_ready.go:83] waiting for pod "coredns-7d764666f9-bdwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.059866  508478 pod_ready.go:94] pod "coredns-7d764666f9-bdwpn" is "Ready"
	I1227 10:05:10.059946  508478 pod_ready.go:86] duration metric: took 6.955254ms for pod "coredns-7d764666f9-bdwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.063226  508478 pod_ready.go:83] waiting for pod "etcd-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.075865  508478 pod_ready.go:94] pod "etcd-embed-certs-017122" is "Ready"
	I1227 10:05:10.075948  508478 pod_ready.go:86] duration metric: took 12.642841ms for pod "etcd-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.079022  508478 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.084903  508478 pod_ready.go:94] pod "kube-apiserver-embed-certs-017122" is "Ready"
	I1227 10:05:10.084984  508478 pod_ready.go:86] duration metric: took 5.89024ms for pod "kube-apiserver-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.088357  508478 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.453052  508478 pod_ready.go:94] pod "kube-controller-manager-embed-certs-017122" is "Ready"
	I1227 10:05:10.453081  508478 pod_ready.go:86] duration metric: took 364.659568ms for pod "kube-controller-manager-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:10.653303  508478 pod_ready.go:83] waiting for pod "kube-proxy-knmrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:11.054302  508478 pod_ready.go:94] pod "kube-proxy-knmrq" is "Ready"
	I1227 10:05:11.054408  508478 pod_ready.go:86] duration metric: took 401.020753ms for pod "kube-proxy-knmrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:11.253314  508478 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:11.652214  508478 pod_ready.go:94] pod "kube-scheduler-embed-certs-017122" is "Ready"
	I1227 10:05:11.652241  508478 pod_ready.go:86] duration metric: took 398.900849ms for pod "kube-scheduler-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:05:11.652254  508478 pod_ready.go:40] duration metric: took 1.603860877s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:05:11.710677  508478 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:05:11.713916  508478 out.go:203] 
	W1227 10:05:11.717244  508478 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:05:11.720215  508478 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:05:11.724083  508478 out.go:179] * Done! kubectl is now configured to use "embed-certs-017122" cluster and "default" namespace by default
	I1227 10:05:09.510221  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:10.010261  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:10.510714  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:11.010445  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:11.510917  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:12.010608  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:12.510297  511805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:05:12.612779  511805 kubeadm.go:1114] duration metric: took 4.320204045s to wait for elevateKubeSystemPrivileges
	I1227 10:05:12.612812  511805 kubeadm.go:403] duration metric: took 17.917960382s to StartCluster
	I1227 10:05:12.612830  511805 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:05:12.612895  511805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:05:12.614660  511805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:05:12.614932  511805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:05:12.614940  511805 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:05:12.615206  511805 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:12.615240  511805 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:05:12.615301  511805 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-681744"
	I1227 10:05:12.615322  511805 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-681744"
	I1227 10:05:12.615344  511805 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:05:12.615804  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:12.616246  511805 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-681744"
	I1227 10:05:12.616282  511805 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-681744"
	I1227 10:05:12.616561  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:12.619188  511805 out.go:179] * Verifying Kubernetes components...
	I1227 10:05:12.624026  511805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:05:12.665798  511805 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:05:12.667724  511805 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-681744"
	I1227 10:05:12.667802  511805 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:05:12.668244  511805 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:12.668799  511805 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:05:12.668816  511805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:05:12.668868  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:12.703173  511805 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:05:12.703196  511805 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:05:12.703256  511805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:12.715906  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:12.751493  511805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:12.983149  511805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:05:12.983258  511805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:05:12.991521  511805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:05:13.095334  511805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:05:13.639640  511805 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-681744" to be "Ready" ...
	I1227 10:05:13.639960  511805 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 10:05:14.000461  511805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00885072s)
	I1227 10:05:14.038647  511805 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:05:14.042505  511805 addons.go:530] duration metric: took 1.427252268s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:05:14.147349  511805 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-681744" context rescaled to 1 replicas
	W1227 10:05:15.643388  511805 node_ready.go:57] node "default-k8s-diff-port-681744" has "Ready":"False" status (will retry)
	W1227 10:05:18.142418  511805 node_ready.go:57] node "default-k8s-diff-port-681744" has "Ready":"False" status (will retry)
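The node_ready retries above are minikube polling the node's Ready condition over the Kubernetes API. A roughly equivalent manual check (a sketch, assuming the default-k8s-diff-port-681744 kubeconfig context created by this run, and mirroring the 6m0s wait in the log):

	kubectl --context default-k8s-diff-port-681744 wait --for=condition=Ready node/default-k8s-diff-port-681744 --timeout=6m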
	
	
	==> CRI-O <==
	Dec 27 10:05:09 embed-certs-017122 crio[834]: time="2025-12-27T10:05:09.129416388Z" level=info msg="Created container df5d3519bc870807a4c0ccc81a60ad10c6889977c24a26946a05cfe7e3e5e2bd: kube-system/coredns-7d764666f9-bdwpn/coredns" id=0429e25a-82a6-471d-aa45-e812f3c94c7c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:05:09 embed-certs-017122 crio[834]: time="2025-12-27T10:05:09.139873544Z" level=info msg="Starting container: df5d3519bc870807a4c0ccc81a60ad10c6889977c24a26946a05cfe7e3e5e2bd" id=528f8e3c-d699-457a-ac61-8b59497ab9df name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:05:09 embed-certs-017122 crio[834]: time="2025-12-27T10:05:09.163542237Z" level=info msg="Started container" PID=1783 containerID=df5d3519bc870807a4c0ccc81a60ad10c6889977c24a26946a05cfe7e3e5e2bd description=kube-system/coredns-7d764666f9-bdwpn/coredns id=528f8e3c-d699-457a-ac61-8b59497ab9df name=/runtime.v1.RuntimeService/StartContainer sandboxID=2879f2516d2ee03f88f86ad2dee3b2245e811fe6ad3401ee8381ac02898822f9
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.2890075Z" level=info msg="Running pod sandbox: default/busybox/POD" id=33055d9b-a3c5-48ea-936b-9a28998e9b7e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.289082397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.301068024Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:706cf70c7873489872b2f90ae898a19b7d4ace4d9197468db1b4f96c0c5ff735 UID:b1105fa6-5257-4bdf-a5ef-08d24fc959ae NetNS:/var/run/netns/46403031-35d6-4d42-81b0-dfdda0332b33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079cf0}] Aliases:map[]}"
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.301127356Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.310951782Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:706cf70c7873489872b2f90ae898a19b7d4ace4d9197468db1b4f96c0c5ff735 UID:b1105fa6-5257-4bdf-a5ef-08d24fc959ae NetNS:/var/run/netns/46403031-35d6-4d42-81b0-dfdda0332b33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079cf0}] Aliases:map[]}"
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.311104637Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.317206596Z" level=info msg="Ran pod sandbox 706cf70c7873489872b2f90ae898a19b7d4ace4d9197468db1b4f96c0c5ff735 with infra container: default/busybox/POD" id=33055d9b-a3c5-48ea-936b-9a28998e9b7e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.323976379Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=61795171-8f32-4f5a-b87c-496b3e9dd72e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.324392524Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=61795171-8f32-4f5a-b87c-496b3e9dd72e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.3245376Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=61795171-8f32-4f5a-b87c-496b3e9dd72e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.328716262Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=051297c2-2560-46cc-a144-70b80bc03339 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:05:12 embed-certs-017122 crio[834]: time="2025-12-27T10:05:12.331488054Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.386415485Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=051297c2-2560-46cc-a144-70b80bc03339 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.387553707Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aaca67fe-4339-4e37-aefb-e5693c3fa60e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.389647813Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f61a788d-b03f-4663-9592-38774604bf3c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.397496165Z" level=info msg="Creating container: default/busybox/busybox" id=4c995626-1315-40d2-be0f-9019509390b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.397626128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.402858743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.403349153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.424144152Z" level=info msg="Created container fe04608b3a69159e912bf6c0befa3ccda666983d1c5863efd19ab6677c9ec5d5: default/busybox/busybox" id=4c995626-1315-40d2-be0f-9019509390b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.426485974Z" level=info msg="Starting container: fe04608b3a69159e912bf6c0befa3ccda666983d1c5863efd19ab6677c9ec5d5" id=210744c7-e8e5-420a-8c5e-c4049186784c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:05:14 embed-certs-017122 crio[834]: time="2025-12-27T10:05:14.432687668Z" level=info msg="Started container" PID=1842 containerID=fe04608b3a69159e912bf6c0befa3ccda666983d1c5863efd19ab6677c9ec5d5 description=default/busybox/busybox id=210744c7-e8e5-420a-8c5e-c4049186784c name=/runtime.v1.RuntimeService/StartContainer sandboxID=706cf70c7873489872b2f90ae898a19b7d4ace4d9197468db1b4f96c0c5ff735
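The CRI-O excerpt above is taken from the crio service journal inside the test node. A comparable dump can usually be pulled directly (a sketch, assuming the embed-certs-017122 profile from this run and that crio runs as a systemd unit in the node image):

	minikube -p embed-certs-017122 ssh -- sudo journalctl -u crio --no-pager | tail -n 50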
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	fe04608b3a691       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   706cf70c78734       busybox                                      default
	df5d3519bc870       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   2879f2516d2ee       coredns-7d764666f9-bdwpn                     kube-system
	ba5c33ff65990       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   a661b67497fb6       storage-provisioner                          kube-system
	50f3ff7ee16e0       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   11f4e72d9a081       kindnet-7ts9b                                kube-system
	1f750fa8fb603       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      28 seconds ago      Running             kube-proxy                0                   b2ac45a2b83c3       kube-proxy-knmrq                             kube-system
	b8a28cf4eb914       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      41 seconds ago      Running             kube-controller-manager   0                   6fa4fea92ba84       kube-controller-manager-embed-certs-017122   kube-system
	417eec82ff02f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      41 seconds ago      Running             kube-scheduler            0                   bf14464fcb9f4       kube-scheduler-embed-certs-017122            kube-system
	5b43ceeb0d794       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      41 seconds ago      Running             etcd                      0                   f4f65512bf122       etcd-embed-certs-017122                      kube-system
	1e16a66b805bc       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      41 seconds ago      Running             kube-apiserver            0                   fc923014779a0       kube-apiserver-embed-certs-017122            kube-system
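The container status table above is crictl's view from inside the node; it can be reproduced with (same profile assumption as above):

	minikube -p embed-certs-017122 ssh -- sudo crictl ps -a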
	
	
	==> coredns [df5d3519bc870807a4c0ccc81a60ad10c6889977c24a26946a05cfe7e3e5e2bd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53738 - 32283 "HINFO IN 5321984033122112368.2909099674332451179. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016541203s
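The CoreDNS lines above are the container's stdout; the same output is normally available through the API server as well (a sketch, using the pod name reported earlier in this log):

	kubectl --context embed-certs-017122 -n kube-system logs coredns-7d764666f9-bdwpn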
	
	
	==> describe nodes <==
	Name:               embed-certs-017122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-017122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=embed-certs-017122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:04:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-017122
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:05:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:05:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:05:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:05:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:05:18 +0000   Sat, 27 Dec 2025 10:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-017122
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                65221525-5166-4f0b-9b53-9db790e49fde
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-bdwpn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-embed-certs-017122                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-7ts9b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-embed-certs-017122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-embed-certs-017122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-knmrq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-embed-certs-017122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node embed-certs-017122 event: Registered Node embed-certs-017122 in Controller
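The block above is the standard node description; it corresponds to (same context assumption as above):

	kubectl --context embed-certs-017122 describe node embed-certs-017122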
	
	
	==> dmesg <==
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
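Because the docker driver shares the host kernel, the dmesg excerpt above is the host's kernel ring buffer, which is why it spans events from well before this cluster was created. It should be reproducible with (a sketch, same profile assumption):

	minikube -p embed-certs-017122 ssh -- sudo dmesg | tail -n 30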
	
	
	==> etcd [5b43ceeb0d794c5ef4998b716b432b06fb115ad3bc8b6fc34de9512042a79196] <==
	{"level":"info","ts":"2025-12-27T10:04:40.851629Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:04:41.218218Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:04:41.218354Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:04:41.218464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T10:04:41.218550Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:04:41.218605Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:04:41.222195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:04:41.222286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:04:41.222331Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:04:41.222374Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:04:41.226297Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:04:41.234316Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-017122 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:04:41.234586Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:04:41.234710Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:04:41.234781Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:04:41.234852Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:04:41.234962Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:04:41.235016Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:04:41.237026Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:04:41.261004Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:04:41.261139Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:04:41.246363Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:04:41.277971Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:04:41.279233Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:04:41.311054Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:05:22 up  2:47,  0 user,  load average: 4.25, 2.46, 2.21
	Linux embed-certs-017122 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [50f3ff7ee16e025e7574e56bbb6d6d7851b15f31e0aa7f63f525136d87473a30] <==
	I1227 10:04:58.020619       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:04:58.021028       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:04:58.021186       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:04:58.021230       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:04:58.021278       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:04:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:04:58.221210       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:04:58.221314       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:04:58.221522       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:04:58.222954       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:04:58.426234       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:04:58.426337       1 metrics.go:72] Registering metrics
	I1227 10:04:58.426418       1 controller.go:711] "Syncing nftables rules"
	I1227 10:05:08.221322       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:05:08.221407       1 main.go:301] handling current node
	I1227 10:05:18.222681       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:05:18.222717       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e16a66b805bcbc34dad204764bf13a17b7abc82308e0a467638673405342a02] <==
	E1227 10:04:44.574563       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 10:04:44.610535       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:04:44.638159       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:04:44.638531       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:04:44.673353       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:04:44.689762       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:04:44.793034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:04:45.306807       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:04:45.337691       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:04:45.337711       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:04:46.389694       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:04:46.442496       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:04:46.552328       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:04:46.567411       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 10:04:46.568572       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:04:46.577984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:04:47.346926       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:04:47.360341       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:04:47.378933       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:04:47.390268       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:04:53.139582       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:04:53.266853       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 10:04:53.543789       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:04:53.610707       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1227 10:05:21.104182       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:36072: use of closed network connection
	
	
	==> kube-controller-manager [b8a28cf4eb91469ced0ca5f10bfddbc7eec17ac50b1db180d838695b017f6351] <==
	I1227 10:04:52.269355       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.269464       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.269626       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.270539       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:04:52.270600       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-017122"
	I1227 10:04:52.270648       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:04:52.270666       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.270679       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.270692       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.270834       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.270867       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.270899       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.271096       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.271129       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.271153       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.269733       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.269741       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.283830       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.288401       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-017122" podCIDRs=["10.244.0.0/24"]
	I1227 10:04:52.317356       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:04:52.466140       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:52.466262       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:04:52.466314       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:04:52.517858       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:12.272793       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [1f750fa8fb603a35ab934b101e270686ae3964b03b9c662f808ac23ef6a77187] <==
	I1227 10:04:54.590197       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:04:54.735933       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:04:54.845164       1 shared_informer.go:377] "Caches are synced"
	I1227 10:04:54.845201       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:04:54.845364       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:04:55.128641       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:04:55.128691       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:04:55.139264       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:04:55.143987       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:04:55.144002       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:04:55.145426       1 config.go:200] "Starting service config controller"
	I1227 10:04:55.145436       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:04:55.145453       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:04:55.145457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:04:55.145467       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:04:55.145471       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:04:55.159606       1 config.go:309] "Starting node config controller"
	I1227 10:04:55.159671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:04:55.159683       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:04:55.246024       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:04:55.246074       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:04:55.246109       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [417eec82ff02f94d20ca912a5ce12b3ffd752ccfdcb3d9611571190aa506d7aa] <==
	E1227 10:04:44.590338       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:04:44.590409       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:04:44.590434       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:04:44.590458       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:04:44.590521       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:04:44.590561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:04:44.590582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:04:45.427250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:04:45.437859       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:04:45.525349       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:04:45.538465       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:04:45.588334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:04:45.625672       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:04:45.670598       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:04:45.832277       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:04:45.883433       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:04:45.892563       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:04:45.907603       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:04:45.979911       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:04:45.998444       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:04:46.024637       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:04:46.082933       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:04:46.085662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:04:46.101470       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1227 10:04:47.635312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:04:53 embed-certs-017122 kubelet[1305]: I1227 10:04:53.545271    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b7367ee9-b0b9-46f4-8178-63756396ad78-cni-cfg\") pod \"kindnet-7ts9b\" (UID: \"b7367ee9-b0b9-46f4-8178-63756396ad78\") " pod="kube-system/kindnet-7ts9b"
	Dec 27 10:04:53 embed-certs-017122 kubelet[1305]: I1227 10:04:53.545298    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7367ee9-b0b9-46f4-8178-63756396ad78-lib-modules\") pod \"kindnet-7ts9b\" (UID: \"b7367ee9-b0b9-46f4-8178-63756396ad78\") " pod="kube-system/kindnet-7ts9b"
	Dec 27 10:04:53 embed-certs-017122 kubelet[1305]: I1227 10:04:53.814766    1305 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:04:54 embed-certs-017122 kubelet[1305]: W1227 10:04:54.167977    1305 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/crio-11f4e72d9a081090906c07f3389855f5a2ccc8de3d484a0834758d37fad8b5c9 WatchSource:0}: Error finding container 11f4e72d9a081090906c07f3389855f5a2ccc8de3d484a0834758d37fad8b5c9: Status 404 returned error can't find the container with id 11f4e72d9a081090906c07f3389855f5a2ccc8de3d484a0834758d37fad8b5c9
	Dec 27 10:04:54 embed-certs-017122 kubelet[1305]: I1227 10:04:54.831744    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-knmrq" podStartSLOduration=1.831729906 podStartE2EDuration="1.831729906s" podCreationTimestamp="2025-12-27 10:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:04:54.831300518 +0000 UTC m=+7.518870665" watchObservedRunningTime="2025-12-27 10:04:54.831729906 +0000 UTC m=+7.519300053"
	Dec 27 10:04:55 embed-certs-017122 kubelet[1305]: E1227 10:04:55.327134    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-017122" containerName="kube-scheduler"
	Dec 27 10:04:57 embed-certs-017122 kubelet[1305]: E1227 10:04:57.332452    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-017122" containerName="kube-controller-manager"
	Dec 27 10:04:58 embed-certs-017122 kubelet[1305]: E1227 10:04:58.430099    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-017122" containerName="etcd"
	Dec 27 10:04:58 embed-certs-017122 kubelet[1305]: E1227 10:04:58.838811    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-017122" containerName="etcd"
	Dec 27 10:05:02 embed-certs-017122 kubelet[1305]: E1227 10:05:02.293090    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-017122" containerName="kube-apiserver"
	Dec 27 10:05:02 embed-certs-017122 kubelet[1305]: I1227 10:05:02.316279    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-7ts9b" podStartSLOduration=5.66177965 podStartE2EDuration="9.316265632s" podCreationTimestamp="2025-12-27 10:04:53 +0000 UTC" firstStartedPulling="2025-12-27 10:04:54.1787511 +0000 UTC m=+6.866321255" lastFinishedPulling="2025-12-27 10:04:57.83323709 +0000 UTC m=+10.520807237" observedRunningTime="2025-12-27 10:04:58.856490813 +0000 UTC m=+11.544060960" watchObservedRunningTime="2025-12-27 10:05:02.316265632 +0000 UTC m=+15.003835787"
	Dec 27 10:05:05 embed-certs-017122 kubelet[1305]: E1227 10:05:05.336995    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-017122" containerName="kube-scheduler"
	Dec 27 10:05:07 embed-certs-017122 kubelet[1305]: E1227 10:05:07.342410    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-017122" containerName="kube-controller-manager"
	Dec 27 10:05:08 embed-certs-017122 kubelet[1305]: I1227 10:05:08.622378    1305 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 10:05:08 embed-certs-017122 kubelet[1305]: I1227 10:05:08.724032    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjbl\" (UniqueName: \"kubernetes.io/projected/d9484e97-87d3-4568-bc0e-929f8c2bac3e-kube-api-access-fbjbl\") pod \"storage-provisioner\" (UID: \"d9484e97-87d3-4568-bc0e-929f8c2bac3e\") " pod="kube-system/storage-provisioner"
	Dec 27 10:05:08 embed-certs-017122 kubelet[1305]: I1227 10:05:08.724278    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d9484e97-87d3-4568-bc0e-929f8c2bac3e-tmp\") pod \"storage-provisioner\" (UID: \"d9484e97-87d3-4568-bc0e-929f8c2bac3e\") " pod="kube-system/storage-provisioner"
	Dec 27 10:05:08 embed-certs-017122 kubelet[1305]: I1227 10:05:08.824655    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ab96e40-4206-4459-bc72-d2eed89b2d21-config-volume\") pod \"coredns-7d764666f9-bdwpn\" (UID: \"7ab96e40-4206-4459-bc72-d2eed89b2d21\") " pod="kube-system/coredns-7d764666f9-bdwpn"
	Dec 27 10:05:08 embed-certs-017122 kubelet[1305]: I1227 10:05:08.824709    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmc4\" (UniqueName: \"kubernetes.io/projected/7ab96e40-4206-4459-bc72-d2eed89b2d21-kube-api-access-tsmc4\") pod \"coredns-7d764666f9-bdwpn\" (UID: \"7ab96e40-4206-4459-bc72-d2eed89b2d21\") " pod="kube-system/coredns-7d764666f9-bdwpn"
	Dec 27 10:05:09 embed-certs-017122 kubelet[1305]: W1227 10:05:09.003344    1305 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/crio-a661b67497fb667d54f14fd9102a68f4e010335a891259661bafa9000fffa792 WatchSource:0}: Error finding container a661b67497fb667d54f14fd9102a68f4e010335a891259661bafa9000fffa792: Status 404 returned error can't find the container with id a661b67497fb667d54f14fd9102a68f4e010335a891259661bafa9000fffa792
	Dec 27 10:05:09 embed-certs-017122 kubelet[1305]: W1227 10:05:09.056096    1305 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/crio-2879f2516d2ee03f88f86ad2dee3b2245e811fe6ad3401ee8381ac02898822f9 WatchSource:0}: Error finding container 2879f2516d2ee03f88f86ad2dee3b2245e811fe6ad3401ee8381ac02898822f9: Status 404 returned error can't find the container with id 2879f2516d2ee03f88f86ad2dee3b2245e811fe6ad3401ee8381ac02898822f9
	Dec 27 10:05:09 embed-certs-017122 kubelet[1305]: E1227 10:05:09.863970    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bdwpn" containerName="coredns"
	Dec 27 10:05:09 embed-certs-017122 kubelet[1305]: I1227 10:05:09.883062    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-bdwpn" podStartSLOduration=16.883046245 podStartE2EDuration="16.883046245s" podCreationTimestamp="2025-12-27 10:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:05:09.882539867 +0000 UTC m=+22.570110022" watchObservedRunningTime="2025-12-27 10:05:09.883046245 +0000 UTC m=+22.570616392"
	Dec 27 10:05:09 embed-certs-017122 kubelet[1305]: I1227 10:05:09.921460    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.921440939 podStartE2EDuration="13.921440939s" podCreationTimestamp="2025-12-27 10:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:05:09.901360319 +0000 UTC m=+22.588930466" watchObservedRunningTime="2025-12-27 10:05:09.921440939 +0000 UTC m=+22.609011102"
	Dec 27 10:05:10 embed-certs-017122 kubelet[1305]: E1227 10:05:10.872042    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bdwpn" containerName="coredns"
	Dec 27 10:05:12 embed-certs-017122 kubelet[1305]: I1227 10:05:12.054480    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7t46\" (UniqueName: \"kubernetes.io/projected/b1105fa6-5257-4bdf-a5ef-08d24fc959ae-kube-api-access-z7t46\") pod \"busybox\" (UID: \"b1105fa6-5257-4bdf-a5ef-08d24fc959ae\") " pod="default/busybox"
	
	
	==> storage-provisioner [ba5c33ff659906f70e19b97ebd0a163e5176de12b8704d365db1b6a312d0a968] <==
	I1227 10:05:09.145348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:05:09.190823       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:05:09.190956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:05:09.194468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:09.220061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:05:09.220309       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:05:09.220515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-017122_f70138e9-0457-4f97-806d-17019406985e!
	W1227 10:05:09.225410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:05:09.228834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d9ebc47-35a9-4be4-b5b3-d21c89072018", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-017122_f70138e9-0457-4f97-806d-17019406985e became leader
	W1227 10:05:09.242927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:05:09.321597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-017122_f70138e9-0457-4f97-806d-17019406985e!
	W1227 10:05:11.246490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:11.251663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:13.254455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:13.263905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:15.267510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:15.274778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:17.288484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:17.294308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:19.297602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:19.302182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:21.305215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:21.310752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-017122 -n embed-certs-017122
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-017122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (319.938445ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:05:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-681744 describe deploy/metrics-server -n kube-system: exit status 1 (87.225776ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-681744 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-681744
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-681744:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89",
	        "Created": "2025-12-27T10:04:44.730801241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 512281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:04:44.796050629Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/hosts",
	        "LogPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89-json.log",
	        "Name": "/default-k8s-diff-port-681744",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-681744:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-681744",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89",
	                "LowerDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-681744",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-681744/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-681744",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-681744",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-681744",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "75b95a185a4b10fd258217f366fdcea0f22f930c920efd742d1051be3061fd7a",
	            "SandboxKey": "/var/run/docker/netns/75b95a185a4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-681744": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:04:f2:b9:8b:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a1f92b122a97b2834afb7ef2e15881b65b61b90adec9a9012e2ffcfe6970dabd",
	                    "EndpointID": "ea09e4267aed1c2505fd9afe96886d2bc15c16ddcacc44db2b608e4d4d8fee70",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-681744",
	                        "d2370e32a3db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-681744 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-681744 logs -n 25: (1.15632157s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:00 UTC │
	│ start   │ -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:00 UTC │ 27 Dec 25 10:01 UTC │
	│ image   │ old-k8s-version-156305 image list --format=json                                                                                                                                                                                               │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │ 27 Dec 25 10:01 UTC │
	│ pause   │ -p old-k8s-version-156305 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:01 UTC │                     │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                                                                                     │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                                                                                                  │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                                                                                                    │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p disable-driver-mounts-242374                                                                                                                                                                                                               │ disable-driver-mounts-242374 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p embed-certs-017122 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:05:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:05:35.885332  515844 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:05:35.885535  515844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:35.885562  515844 out.go:374] Setting ErrFile to fd 2...
	I1227 10:05:35.885582  515844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:35.885976  515844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:05:35.886528  515844 out.go:368] Setting JSON to false
	I1227 10:05:35.887524  515844 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10085,"bootTime":1766819851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:05:35.887654  515844 start.go:143] virtualization:  
	I1227 10:05:35.891077  515844 out.go:179] * [embed-certs-017122] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:05:35.893392  515844 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:05:35.893513  515844 notify.go:221] Checking for updates...
	I1227 10:05:35.900141  515844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:05:35.903005  515844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:05:35.905880  515844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:05:35.908727  515844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:05:35.911592  515844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:05:35.915126  515844 config.go:182] Loaded profile config "embed-certs-017122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:35.915664  515844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:05:35.949960  515844 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:05:35.950110  515844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:05:36.010782  515844 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:05:35.997169557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:05:36.010903  515844 docker.go:319] overlay module found
	I1227 10:05:36.014255  515844 out.go:179] * Using the docker driver based on existing profile
	I1227 10:05:36.017124  515844 start.go:309] selected driver: docker
	I1227 10:05:36.017151  515844 start.go:928] validating driver "docker" against &{Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:05:36.017272  515844 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:05:36.018051  515844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:05:36.090598  515844 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:05:36.080448913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:05:36.090945  515844 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:05:36.090985  515844 cni.go:84] Creating CNI manager for ""
	I1227 10:05:36.091050  515844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:05:36.091089  515844 start.go:353] cluster config:
	{Name:embed-certs-017122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-017122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:05:36.094385  515844 out.go:179] * Starting "embed-certs-017122" primary control-plane node in "embed-certs-017122" cluster
	I1227 10:05:36.097251  515844 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:05:36.100303  515844 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:05:36.103157  515844 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:05:36.103215  515844 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:05:36.103246  515844 cache.go:65] Caching tarball of preloaded images
	I1227 10:05:36.103256  515844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:05:36.103335  515844 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:05:36.103345  515844 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:05:36.103465  515844 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/embed-certs-017122/config.json ...
	I1227 10:05:36.123091  515844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:05:36.123115  515844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:05:36.123131  515844 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:05:36.123162  515844 start.go:360] acquireMachinesLock for embed-certs-017122: {Name:mkc5c6a144bc51d843c500d769feb1ef839b15a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:05:36.123221  515844 start.go:364] duration metric: took 36.177µs to acquireMachinesLock for "embed-certs-017122"
	I1227 10:05:36.123245  515844 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:05:36.123251  515844 fix.go:54] fixHost starting: 
	I1227 10:05:36.123512  515844 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:05:36.140554  515844 fix.go:112] recreateIfNeeded on embed-certs-017122: state=Stopped err=<nil>
	W1227 10:05:36.140583  515844 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 27 10:05:26 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:26.361093954Z" level=info msg="Created container 3f303297e42665fe3ab40efb7cc6b072648285baef2af41fcf11f37b4cd56b76: kube-system/coredns-7d764666f9-gsk6s/coredns" id=62fa25ce-4090-4f0c-bbc1-652ab68fb292 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:05:26 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:26.362826496Z" level=info msg="Starting container: 3f303297e42665fe3ab40efb7cc6b072648285baef2af41fcf11f37b4cd56b76" id=4b1878dc-f118-4be1-a18d-4e47744a811d name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:05:26 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:26.367526272Z" level=info msg="Started container" PID=1774 containerID=3f303297e42665fe3ab40efb7cc6b072648285baef2af41fcf11f37b4cd56b76 description=kube-system/coredns-7d764666f9-gsk6s/coredns id=4b1878dc-f118-4be1-a18d-4e47744a811d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f9cd5edc5ed43c776999257c3eb2f062b737d07ec3828cd0cff58ef70dd6853
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.008959998Z" level=info msg="Running pod sandbox: default/busybox/POD" id=00f2b452-acc1-4e81-8a4e-fcc2679e202b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.009036938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.021438529Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7cc20e1f732aadd605f2dc73a4fa2133f342eb889159ea1520ae9444b2ec2c32 UID:009d55c6-9295-4db0-86af-fd454f83cf65 NetNS:/var/run/netns/6ac12175-b2b7-4308-bafd-eccdb37aeb5f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ebbc0}] Aliases:map[]}"
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.021482648Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.030476098Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7cc20e1f732aadd605f2dc73a4fa2133f342eb889159ea1520ae9444b2ec2c32 UID:009d55c6-9295-4db0-86af-fd454f83cf65 NetNS:/var/run/netns/6ac12175-b2b7-4308-bafd-eccdb37aeb5f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000ebbc0}] Aliases:map[]}"
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.030670398Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.033698169Z" level=info msg="Ran pod sandbox 7cc20e1f732aadd605f2dc73a4fa2133f342eb889159ea1520ae9444b2ec2c32 with infra container: default/busybox/POD" id=00f2b452-acc1-4e81-8a4e-fcc2679e202b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.035425206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0961eb71-5c57-41d8-a2eb-d6a311ba8f55 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.035584855Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0961eb71-5c57-41d8-a2eb-d6a311ba8f55 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.035638328Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0961eb71-5c57-41d8-a2eb-d6a311ba8f55 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.038527923Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=66668308-45a2-47d4-950b-6a8762fa8ece name=/runtime.v1.ImageService/PullImage
	Dec 27 10:05:29 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:29.040996992Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.079024305Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=66668308-45a2-47d4-950b-6a8762fa8ece name=/runtime.v1.ImageService/PullImage
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.079664264Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=871b9d60-7287-4b84-87ef-462661c90f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.083299386Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6ae2fade-bcc3-42fb-8bf0-4c12068a4ed1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.089464452Z" level=info msg="Creating container: default/busybox/busybox" id=ee6fd684-9268-4bfa-ab9e-d1cf8148e191 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.089598566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.097389112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.09797824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.114950628Z" level=info msg="Created container 66336ce2d1cee89d58c02c850fc4f008f431323e7347a4f07a9c8cb4dc936ae6: default/busybox/busybox" id=ee6fd684-9268-4bfa-ab9e-d1cf8148e191 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.115945619Z" level=info msg="Starting container: 66336ce2d1cee89d58c02c850fc4f008f431323e7347a4f07a9c8cb4dc936ae6" id=50b9b666-113f-46f2-ba3f-05ae71d5374a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:05:31 default-k8s-diff-port-681744 crio[838]: time="2025-12-27T10:05:31.119179054Z" level=info msg="Started container" PID=1826 containerID=66336ce2d1cee89d58c02c850fc4f008f431323e7347a4f07a9c8cb4dc936ae6 description=default/busybox/busybox id=50b9b666-113f-46f2-ba3f-05ae71d5374a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cc20e1f732aadd605f2dc73a4fa2133f342eb889159ea1520ae9444b2ec2c32
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	66336ce2d1cee       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   7cc20e1f732aa       busybox                                                default
	3f303297e4266       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   3f9cd5edc5ed4       coredns-7d764666f9-gsk6s                               kube-system
	6ffab1fcb0d8a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   946eb5b021471       storage-provisioner                                    kube-system
	ca23d7ced86c6       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   857cbe607e0c5       kindnet-n6bcg                                          kube-system
	f33343a4ccae6       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      25 seconds ago      Running             kube-proxy                0                   114a868921152       kube-proxy-6wq7w                                       kube-system
	9ee16407148c7       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      37 seconds ago      Running             kube-controller-manager   0                   29c18af7befcb       kube-controller-manager-default-k8s-diff-port-681744   kube-system
	88523b2615e9c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      37 seconds ago      Running             etcd                      0                   e569afc58213b       etcd-default-k8s-diff-port-681744                      kube-system
	80eabbea5b455       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      37 seconds ago      Running             kube-apiserver            0                   76acb60d9d153       kube-apiserver-default-k8s-diff-port-681744            kube-system
	134346be0a2c5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      37 seconds ago      Running             kube-scheduler            0                   258c7244ec9dc       kube-scheduler-default-k8s-diff-port-681744            kube-system
	
	
	==> coredns [3f303297e42665fe3ab40efb7cc6b072648285baef2af41fcf11f37b4cd56b76] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49500 - 62626 "HINFO IN 7209509470302300619.7673354716498306598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029271634s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-681744
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-681744
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=default-k8s-diff-port-681744
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_05_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:05:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-681744
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:05:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:05:38 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:05:38 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:05:38 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:05:38 +0000   Sat, 27 Dec 2025 10:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-681744
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                aaa4a45e-c8b8-47d4-86bd-5fcd976160a4
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-gsk6s                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-default-k8s-diff-port-681744                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-n6bcg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-681744             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-681744    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-6wq7w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-681744             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node default-k8s-diff-port-681744 event: Registered Node default-k8s-diff-port-681744 in Controller
	
	
	==> dmesg <==
	[Dec27 09:33] overlayfs: idmapped layers are currently not supported
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [88523b2615e9c85bc8ee410f549460037295542ed80508266235d54d6aa359de] <==
	{"level":"info","ts":"2025-12-27T10:05:01.870765Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:05:02.348164Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:05:02.348221Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:05:02.348266Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T10:05:02.348276Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:05:02.348292Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:02.349413Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:02.349503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:05:02.349546Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:02.349592Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:02.351629Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-681744 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:05:02.351855Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:05:02.352037Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:05:02.352173Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:05:02.352342Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:05:02.352367Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:05:02.353361Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:05:02.353519Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:05:02.353585Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:05:02.353655Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:05:02.353761Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:05:02.368473Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:05:02.376495Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:05:02.386563Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:05:02.387363Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:05:39 up  2:48,  0 user,  load average: 3.26, 2.35, 2.18
	Linux default-k8s-diff-port-681744 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca23d7ced86c658fb45468154b964e58bdad9ea4f0112178de828ba0bdd610b8] <==
	I1227 10:05:15.532791       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:05:15.619066       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:05:15.619221       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:05:15.619240       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:05:15.619255       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:05:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:05:15.819511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:05:15.819544       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:05:15.819564       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:05:15.819903       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:05:16.020257       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:05:16.020354       1 metrics.go:72] Registering metrics
	I1227 10:05:16.020445       1 controller.go:711] "Syncing nftables rules"
	I1227 10:05:25.819315       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:05:25.819374       1 main.go:301] handling current node
	I1227 10:05:35.822216       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:05:35.822253       1 main.go:301] handling current node
	
	
	==> kube-apiserver [80eabbea5b45573fb30309b58dddb4e5c9eb5bbf5f08f7f507e629f47fb57d5c] <==
	I1227 10:05:04.813958       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:05:04.815201       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:05:04.826275       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:05:04.840210       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:05:04.862204       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:05:04.862346       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:05:04.878576       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:05:04.879382       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:05:05.526814       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:05:05.534456       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:05:05.534478       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:05:06.289394       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:05:06.343183       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:05:06.438455       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:05:06.446341       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 10:05:06.447544       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:05:06.453009       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:05:06.670434       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:05:07.314819       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:05:07.381349       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:05:07.413751       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:05:12.337515       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:05:12.342663       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:05:12.477547       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:05:12.747565       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9ee16407148c752f042ad4b5204f6e4472047e73ff7acc03ff941915a4e43685] <==
	I1227 10:05:11.482393       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:05:11.482402       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:11.482406       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.482733       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.482892       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.483389       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.484348       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.484388       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.485487       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.485548       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.485599       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.485634       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.485708       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:05:11.485785       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-681744"
	I1227 10:05:11.485885       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:05:11.485915       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.485949       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.504427       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:11.505033       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-681744" podCIDRs=["10.244.0.0/24"]
	I1227 10:05:11.507736       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.581959       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:11.581988       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:05:11.581994       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:05:11.604876       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:26.487882       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [f33343a4ccae69adec2e6c5c97503651ecbd7db3ad2a63f5ac994aee73b8bcaa] <==
	I1227 10:05:13.464323       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:05:13.577433       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:13.678400       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:13.678434       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:05:13.678501       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:05:13.729385       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:05:13.729447       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:05:13.766247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:05:13.766900       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:05:13.766918       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:05:13.768943       1 config.go:200] "Starting service config controller"
	I1227 10:05:13.768955       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:05:13.768973       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:05:13.768976       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:05:13.768996       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:05:13.769002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:05:13.771529       1 config.go:309] "Starting node config controller"
	I1227 10:05:13.771562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:05:13.771570       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:05:13.971940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:05:13.971983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:05:13.972030       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [134346be0a2c54bef2aa6b42094a85b48eed2e9b31b6475b20718c20e900b604] <==
	E1227 10:05:04.713198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:05:04.713236       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:05:04.727513       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:05:04.727580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:05:04.727614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:05:04.727719       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:05:04.729057       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:05:04.729137       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:05:04.729725       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:05:04.729753       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:05:04.730118       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:05:04.730408       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:05:05.585580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:05:05.670108       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:05:05.715910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:05:05.731611       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:05:05.807370       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:05:05.823202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:05:05.906106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:05:05.906686       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:05:05.939271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:05:05.945266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:05:05.969496       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:05:06.128339       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 10:05:07.969200       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:05:12 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:12.968838    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa32b3d6-de74-4996-8943-cd4072b7a4e4-lib-modules\") pod \"kindnet-n6bcg\" (UID: \"fa32b3d6-de74-4996-8943-cd4072b7a4e4\") " pod="kube-system/kindnet-n6bcg"
	Dec 27 10:05:13 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:13.085234    1299 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:05:13 default-k8s-diff-port-681744 kubelet[1299]: W1227 10:05:13.220487    1299 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/crio-114a8689211523508c0c2d59ba3192142c3c2ce025380208fac48093fb2683b2 WatchSource:0}: Error finding container 114a8689211523508c0c2d59ba3192142c3c2ce025380208fac48093fb2683b2: Status 404 returned error can't find the container with id 114a8689211523508c0c2d59ba3192142c3c2ce025380208fac48093fb2683b2
	Dec 27 10:05:14 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:14.829128    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-681744" containerName="kube-apiserver"
	Dec 27 10:05:14 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:14.845108    1299 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6wq7w" podStartSLOduration=2.845093006 podStartE2EDuration="2.845093006s" podCreationTimestamp="2025-12-27 10:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:05:13.450846064 +0000 UTC m=+6.278327218" watchObservedRunningTime="2025-12-27 10:05:14.845093006 +0000 UTC m=+7.672574151"
	Dec 27 10:05:15 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:15.402533    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-681744" containerName="kube-controller-manager"
	Dec 27 10:05:16 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:16.381338    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-681744" containerName="kube-scheduler"
	Dec 27 10:05:20 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:20.477253    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-681744" containerName="etcd"
	Dec 27 10:05:20 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:20.490914    1299 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-n6bcg" podStartSLOduration=6.2089835 podStartE2EDuration="8.490896685s" podCreationTimestamp="2025-12-27 10:05:12 +0000 UTC" firstStartedPulling="2025-12-27 10:05:13.202437019 +0000 UTC m=+6.029918165" lastFinishedPulling="2025-12-27 10:05:15.484350204 +0000 UTC m=+8.311831350" observedRunningTime="2025-12-27 10:05:16.443013148 +0000 UTC m=+9.270494302" watchObservedRunningTime="2025-12-27 10:05:20.490896685 +0000 UTC m=+13.318377847"
	Dec 27 10:05:24 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:24.838541    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-681744" containerName="kube-apiserver"
	Dec 27 10:05:25 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:25.412042    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-681744" containerName="kube-controller-manager"
	Dec 27 10:05:25 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:25.914284    1299 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:26.082781    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9skt\" (UniqueName: \"kubernetes.io/projected/7d515dc0-eaac-424b-9308-be2c50a7d4fc-kube-api-access-w9skt\") pod \"storage-provisioner\" (UID: \"7d515dc0-eaac-424b-9308-be2c50a7d4fc\") " pod="kube-system/storage-provisioner"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:26.082849    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cd01233-f1ab-4fa5-b523-fcd838dbbdad-config-volume\") pod \"coredns-7d764666f9-gsk6s\" (UID: \"5cd01233-f1ab-4fa5-b523-fcd838dbbdad\") " pod="kube-system/coredns-7d764666f9-gsk6s"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:26.082884    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgf8\" (UniqueName: \"kubernetes.io/projected/5cd01233-f1ab-4fa5-b523-fcd838dbbdad-kube-api-access-xpgf8\") pod \"coredns-7d764666f9-gsk6s\" (UID: \"5cd01233-f1ab-4fa5-b523-fcd838dbbdad\") " pod="kube-system/coredns-7d764666f9-gsk6s"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:26.082906    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d515dc0-eaac-424b-9308-be2c50a7d4fc-tmp\") pod \"storage-provisioner\" (UID: \"7d515dc0-eaac-424b-9308-be2c50a7d4fc\") " pod="kube-system/storage-provisioner"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: W1227 10:05:26.272649    1299 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/crio-946eb5b021471d1da832fe8eab1598b1f1346626a2b53869c439ddadcdb182ee WatchSource:0}: Error finding container 946eb5b021471d1da832fe8eab1598b1f1346626a2b53869c439ddadcdb182ee: Status 404 returned error can't find the container with id 946eb5b021471d1da832fe8eab1598b1f1346626a2b53869c439ddadcdb182ee
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: W1227 10:05:26.303279    1299 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/crio-3f9cd5edc5ed43c776999257c3eb2f062b737d07ec3828cd0cff58ef70dd6853 WatchSource:0}: Error finding container 3f9cd5edc5ed43c776999257c3eb2f062b737d07ec3828cd0cff58ef70dd6853: Status 404 returned error can't find the container with id 3f9cd5edc5ed43c776999257c3eb2f062b737d07ec3828cd0cff58ef70dd6853
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:26.398871    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-681744" containerName="kube-scheduler"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:26.450473    1299 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gsk6s" containerName="coredns"
	Dec 27 10:05:26 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:26.505963    1299 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.505945515 podStartE2EDuration="13.505945515s" podCreationTimestamp="2025-12-27 10:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:05:26.490198242 +0000 UTC m=+19.317679396" watchObservedRunningTime="2025-12-27 10:05:26.505945515 +0000 UTC m=+19.333426669"
	Dec 27 10:05:27 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:27.455618    1299 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gsk6s" containerName="coredns"
	Dec 27 10:05:28 default-k8s-diff-port-681744 kubelet[1299]: E1227 10:05:28.458272    1299 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gsk6s" containerName="coredns"
	Dec 27 10:05:28 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:28.699504    1299 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gsk6s" podStartSLOduration=16.699461822 podStartE2EDuration="16.699461822s" podCreationTimestamp="2025-12-27 10:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:05:26.516493609 +0000 UTC m=+19.343974780" watchObservedRunningTime="2025-12-27 10:05:28.699461822 +0000 UTC m=+21.526942968"
	Dec 27 10:05:28 default-k8s-diff-port-681744 kubelet[1299]: I1227 10:05:28.800254    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkm6\" (UniqueName: \"kubernetes.io/projected/009d55c6-9295-4db0-86af-fd454f83cf65-kube-api-access-bwkm6\") pod \"busybox\" (UID: \"009d55c6-9295-4db0-86af-fd454f83cf65\") " pod="default/busybox"
	
	
	==> storage-provisioner [6ffab1fcb0d8a879dcdf39c6ebda116d12c42ab0be8df58c5f90c8d848d181d5] <==
	I1227 10:05:26.347383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:05:26.362767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:05:26.362815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:05:26.364919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:26.380972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:05:26.381149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:05:26.381353       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-681744_98b4dd3c-1762-4118-9901-b1adc0599881!
	I1227 10:05:26.388236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b784311c-5962-4e1e-afb9-963a396928d5", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-681744_98b4dd3c-1762-4118-9901-b1adc0599881 became leader
	W1227 10:05:26.394388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:26.421847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:05:26.481817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-681744_98b4dd3c-1762-4118-9901-b1adc0599881!
	W1227 10:05:28.426754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:28.432059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:30.434803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:30.442620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:32.445877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:32.450470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:34.453687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:34.460683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:36.471372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:36.477135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:38.479885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:05:38.484545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
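The kube-scheduler "Failed to watch ... is forbidden" burst in the log above is confined to the restart window: the informers start listing before the apiserver has finished reconciling its bootstrap RBAC policy, and the errors stop once "Caches are synced" is logged at 10:05:07. A follow-up check, as a sketch (context name taken from this run; --as impersonation must be permitted for the kubeconfig user):

	kubectl --context default-k8s-diff-port-681744 auth can-i list pods --as=system:kube-scheduler
	kubectl --context default-k8s-diff-port-681744 auth can-i watch namespaces --as=system:kube-scheduler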
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
E1227 10:05:39.952406  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.51s)
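The "Loading client cert failed" line recorded during this post-mortem points at the old-k8s-version-156305 profile, which the audit table in the following post-mortem shows was deleted at 10:02, so it appears unrelated to this test: the client-go tls-transport-cache in the test process is still trying to reload a certificate for a profile that no longer exists. A diagnostic sketch (assumes the kubeconfig context is named after the profile, as minikube does by default):

	# the referenced profile directory was removed earlier in this run
	ls /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/ || echo "profile directory is gone"
	# list and, if present, prune any kubeconfig entry still pointing at it
	kubectl config get-contexts
	kubectl config delete-context old-k8s-version-156305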

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-017122 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-017122 --alsologtostderr -v=1: exit status 80 (1.88605157s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-017122 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:06:38.814991  520935 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:06:38.815209  520935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:38.815235  520935 out.go:374] Setting ErrFile to fd 2...
	I1227 10:06:38.815243  520935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:38.815770  520935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:06:38.816295  520935 out.go:368] Setting JSON to false
	I1227 10:06:38.817111  520935 mustload.go:66] Loading cluster: embed-certs-017122
	I1227 10:06:38.817998  520935 config.go:182] Loaded profile config "embed-certs-017122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:38.818843  520935 cli_runner.go:164] Run: docker container inspect embed-certs-017122 --format={{.State.Status}}
	I1227 10:06:38.837896  520935 host.go:66] Checking if "embed-certs-017122" exists ...
	I1227 10:06:38.838364  520935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:38.900771  520935 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 10:06:38.891097537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:38.901524  520935 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-017122 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(boo
l=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:06:38.906962  520935 out.go:179] * Pausing node embed-certs-017122 ... 
	I1227 10:06:38.911373  520935 host.go:66] Checking if "embed-certs-017122" exists ...
	I1227 10:06:38.911724  520935 ssh_runner.go:195] Run: systemctl --version
	I1227 10:06:38.911779  520935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-017122
	I1227 10:06:38.929033  520935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/embed-certs-017122/id_rsa Username:docker}
	I1227 10:06:39.040663  520935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:06:39.079879  520935 pause.go:52] kubelet running: true
	I1227 10:06:39.080016  520935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:06:39.353663  520935 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:06:39.353763  520935 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:06:39.423332  520935 cri.go:96] found id: "c42ed83b7c4b6ba64ecae6ede25519b2ab7b1b1805784e124265c4431dd093ac"
	I1227 10:06:39.423353  520935 cri.go:96] found id: "4c9ab6fe53512b9ed7548d8e5e3e67c177bc266d4293df4e51df87ff6e091014"
	I1227 10:06:39.423358  520935 cri.go:96] found id: "d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275"
	I1227 10:06:39.423377  520935 cri.go:96] found id: "2fd5b0ea3051372087c322ed48f365491b0c576c41d49c831586bb295e8cd4b1"
	I1227 10:06:39.423380  520935 cri.go:96] found id: "db5a895de4aa9bea7fd27e010a93c2e73b2cd31487927aa4bc444480a74acabc"
	I1227 10:06:39.423384  520935 cri.go:96] found id: "0cc3bd645f392c02eb74608b63e52a0c2ca4f3ab5d2fa6e9de3815e6b3f84037"
	I1227 10:06:39.423387  520935 cri.go:96] found id: "76f3b93c8f471b74c03f3058edede420056a0cf37682f580aa788c86b60dd759"
	I1227 10:06:39.423390  520935 cri.go:96] found id: "6cfa79ecfd13f3a2204b0eca76862e1ae58e5961230bbbb0e2c311e1886de756"
	I1227 10:06:39.423394  520935 cri.go:96] found id: "44cdcc6347ae5077ffdabfa2362bee311b3b59c6c54028ea82f59bab340bbb83"
	I1227 10:06:39.423400  520935 cri.go:96] found id: "9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392"
	I1227 10:06:39.423403  520935 cri.go:96] found id: "185eb58aa58d9d750b982bfdc9c22d6399ce253489b24c332510411e62876981"
	I1227 10:06:39.423406  520935 cri.go:96] found id: ""
	I1227 10:06:39.423455  520935 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:06:39.443249  520935 retry.go:84] will retry after 400ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:06:39Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:06:39.817856  520935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:06:39.832413  520935 pause.go:52] kubelet running: false
	I1227 10:06:39.832487  520935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:06:40.020118  520935 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:06:40.020218  520935 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:06:40.126691  520935 cri.go:96] found id: "c42ed83b7c4b6ba64ecae6ede25519b2ab7b1b1805784e124265c4431dd093ac"
	I1227 10:06:40.126714  520935 cri.go:96] found id: "4c9ab6fe53512b9ed7548d8e5e3e67c177bc266d4293df4e51df87ff6e091014"
	I1227 10:06:40.126719  520935 cri.go:96] found id: "d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275"
	I1227 10:06:40.126722  520935 cri.go:96] found id: "2fd5b0ea3051372087c322ed48f365491b0c576c41d49c831586bb295e8cd4b1"
	I1227 10:06:40.126725  520935 cri.go:96] found id: "db5a895de4aa9bea7fd27e010a93c2e73b2cd31487927aa4bc444480a74acabc"
	I1227 10:06:40.126729  520935 cri.go:96] found id: "0cc3bd645f392c02eb74608b63e52a0c2ca4f3ab5d2fa6e9de3815e6b3f84037"
	I1227 10:06:40.126733  520935 cri.go:96] found id: "76f3b93c8f471b74c03f3058edede420056a0cf37682f580aa788c86b60dd759"
	I1227 10:06:40.126736  520935 cri.go:96] found id: "6cfa79ecfd13f3a2204b0eca76862e1ae58e5961230bbbb0e2c311e1886de756"
	I1227 10:06:40.126739  520935 cri.go:96] found id: "44cdcc6347ae5077ffdabfa2362bee311b3b59c6c54028ea82f59bab340bbb83"
	I1227 10:06:40.126765  520935 cri.go:96] found id: "9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392"
	I1227 10:06:40.126772  520935 cri.go:96] found id: "185eb58aa58d9d750b982bfdc9c22d6399ce253489b24c332510411e62876981"
	I1227 10:06:40.126775  520935 cri.go:96] found id: ""
	I1227 10:06:40.126829  520935 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:06:40.364498  520935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:06:40.378077  520935 pause.go:52] kubelet running: false
	I1227 10:06:40.378223  520935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:06:40.549829  520935 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:06:40.549985  520935 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:06:40.617784  520935 cri.go:96] found id: "c42ed83b7c4b6ba64ecae6ede25519b2ab7b1b1805784e124265c4431dd093ac"
	I1227 10:06:40.617860  520935 cri.go:96] found id: "4c9ab6fe53512b9ed7548d8e5e3e67c177bc266d4293df4e51df87ff6e091014"
	I1227 10:06:40.617881  520935 cri.go:96] found id: "d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275"
	I1227 10:06:40.617901  520935 cri.go:96] found id: "2fd5b0ea3051372087c322ed48f365491b0c576c41d49c831586bb295e8cd4b1"
	I1227 10:06:40.617931  520935 cri.go:96] found id: "db5a895de4aa9bea7fd27e010a93c2e73b2cd31487927aa4bc444480a74acabc"
	I1227 10:06:40.617955  520935 cri.go:96] found id: "0cc3bd645f392c02eb74608b63e52a0c2ca4f3ab5d2fa6e9de3815e6b3f84037"
	I1227 10:06:40.617975  520935 cri.go:96] found id: "76f3b93c8f471b74c03f3058edede420056a0cf37682f580aa788c86b60dd759"
	I1227 10:06:40.617995  520935 cri.go:96] found id: "6cfa79ecfd13f3a2204b0eca76862e1ae58e5961230bbbb0e2c311e1886de756"
	I1227 10:06:40.618052  520935 cri.go:96] found id: "44cdcc6347ae5077ffdabfa2362bee311b3b59c6c54028ea82f59bab340bbb83"
	I1227 10:06:40.618072  520935 cri.go:96] found id: "9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392"
	I1227 10:06:40.618092  520935 cri.go:96] found id: "185eb58aa58d9d750b982bfdc9c22d6399ce253489b24c332510411e62876981"
	I1227 10:06:40.618112  520935 cri.go:96] found id: ""
	I1227 10:06:40.618297  520935 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:06:40.634134  520935 out.go:203] 
	W1227 10:06:40.637205  520935 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:06:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:06:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:06:40.637234  520935 out.go:285] * 
	* 
	W1227 10:06:40.641520  520935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:06:40.644792  520935 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-017122 --alsologtostderr -v=1 failed: exit status 80
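The pause fails in the container-listing step rather than in pausing anything: crictl still returns the kube-system container IDs, but sudo runc list -f json exits 1 with "open /run/runc: no such file or directory", and after the 400ms retry minikube gives up with GUEST_PAUSE. A hedged diagnostic sketch for the node (paths are the defaults seen in this log, not verified for this host):

	out/minikube-linux-arm64 -p embed-certs-017122 ssh -- sudo ls -ld /run/runc /run/crio
	out/minikube-linux-arm64 -p embed-certs-017122 ssh -- sudo crictl ps -a
	out/minikube-linux-arm64 -p embed-certs-017122 ssh -- sudo runc --root /run/runc list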
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-017122
helpers_test.go:244: (dbg) docker inspect embed-certs-017122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4",
	        "Created": "2025-12-27T10:04:22.683463694Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:05:36.177427623Z",
	            "FinishedAt": "2025-12-27T10:05:35.31446463Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/hosts",
	        "LogPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4-json.log",
	        "Name": "/embed-certs-017122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-017122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-017122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4",
	                "LowerDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-017122",
	                "Source": "/var/lib/docker/volumes/embed-certs-017122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-017122",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-017122",
	                "name.minikube.sigs.k8s.io": "embed-certs-017122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39b46dfffc95ef177f153210bb7a7e5e7aa063e1ce9641ef950769297f2ac25a",
	            "SandboxKey": "/var/run/docker/netns/39b46dfffc95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-017122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:5e:1c:c8:a6:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ffc320fafa322491008f70d428c80b42cc8ee40dadd5618a8bbe80fddaf33d5",
	                    "EndpointID": "fd12319fbd4646f7b6cbcb3359cc21942ae3d8520fec65a069abce5434e15c69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-017122",
	                        "f2b20a6dc274"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
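The docker inspect output shows the kicbase container itself was never touched: State.Status is "running" and State.Paused is false, so the GUEST_PAUSE failure is entirely inside the guest rather than at the Docker layer. The same check as a one-liner (sketch, container name from this run):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' embed-certs-017122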
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122: exit status 2 (349.086097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
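The host still reports Running, but the failed pause attempt had already run "systemctl disable --now kubelet" before giving up (pause.go:52 flips from "kubelet running: true" to "false" in the stderr above), so the profile is left with the kubelet stopped and nothing actually paused. A hedged recovery sketch for a profile wedged in this state, either re-enabling the kubelet directly or letting unpause do it:

	out/minikube-linux-arm64 -p embed-certs-017122 ssh -- sudo systemctl enable --now kubelet
	out/minikube-linux-arm64 unpause -p embed-certs-017122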
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-017122 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-017122 logs -n 25: (1.450936364s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                              │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                             │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                               │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                              │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p no-preload-021144                                                                                                                                                     │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p no-preload-021144                                                                                                                                                     │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p disable-driver-mounts-242374                                                                                                                                          │ disable-driver-mounts-242374 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p embed-certs-017122 --alsologtostderr -v=3                                                                                                                             │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                              │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                             │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:05:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:05:52.976420  518484 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:05:52.976641  518484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:52.976650  518484 out.go:374] Setting ErrFile to fd 2...
	I1227 10:05:52.976655  518484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:52.976920  518484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:05:52.977297  518484 out.go:368] Setting JSON to false
	I1227 10:05:52.978382  518484 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10102,"bootTime":1766819851,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:05:52.978455  518484 start.go:143] virtualization:  
	I1227 10:05:52.982838  518484 out.go:179] * [default-k8s-diff-port-681744] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:05:52.986821  518484 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:05:52.986876  518484 notify.go:221] Checking for updates...
	I1227 10:05:52.993293  518484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:05:52.996436  518484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:05:52.999552  518484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:05:53.002599  518484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:05:53.005653  518484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:05:53.009156  518484 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:53.009751  518484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:05:53.050650  518484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:05:53.050771  518484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:05:53.147744  518484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:05:53.138029179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:05:53.147848  518484 docker.go:319] overlay module found
	I1227 10:05:53.151424  518484 out.go:179] * Using the docker driver based on existing profile
	I1227 10:05:53.154777  518484 start.go:309] selected driver: docker
	I1227 10:05:53.154797  518484 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:05:53.154902  518484 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:05:53.155634  518484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:05:53.246584  518484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:05:53.237445187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:05:53.246893  518484 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:05:53.246913  518484 cni.go:84] Creating CNI manager for ""
	I1227 10:05:53.246966  518484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:05:53.247002  518484 start.go:353] cluster config:
	{Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:05:53.250199  518484 out.go:179] * Starting "default-k8s-diff-port-681744" primary control-plane node in "default-k8s-diff-port-681744" cluster
	I1227 10:05:53.252987  518484 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:05:53.256406  518484 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:05:53.259393  518484 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:05:53.259439  518484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:05:53.259450  518484 cache.go:65] Caching tarball of preloaded images
	I1227 10:05:53.259563  518484 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:05:53.259574  518484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:05:53.259697  518484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json ...
	I1227 10:05:53.259912  518484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:05:53.284599  518484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:05:53.284625  518484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:05:53.284641  518484 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:05:53.284670  518484 start.go:360] acquireMachinesLock for default-k8s-diff-port-681744: {Name:mk8a28038e1b078aa1c0d3cea0d9a4fa9fc07d3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:05:53.284734  518484 start.go:364] duration metric: took 41.601µs to acquireMachinesLock for "default-k8s-diff-port-681744"
	I1227 10:05:53.284761  518484 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:05:53.284770  518484 fix.go:54] fixHost starting: 
	I1227 10:05:53.285034  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:53.314882  518484 fix.go:112] recreateIfNeeded on default-k8s-diff-port-681744: state=Stopped err=<nil>
	W1227 10:05:53.314916  518484 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 10:05:50.985413  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:05:52.985766  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:05:55.486225  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:05:53.319285  518484 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-681744" ...
	I1227 10:05:53.319385  518484 cli_runner.go:164] Run: docker start default-k8s-diff-port-681744
	I1227 10:05:53.656481  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:53.686924  518484 kic.go:430] container "default-k8s-diff-port-681744" state is running.
	I1227 10:05:53.687416  518484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:05:53.711540  518484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json ...
	I1227 10:05:53.711748  518484 machine.go:94] provisionDockerMachine start ...
	I1227 10:05:53.711808  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:53.738106  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:53.738488  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:53.738500  518484 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:05:53.739112  518484 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50078->127.0.0.1:33456: read: connection reset by peer
	I1227 10:05:56.890497  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-681744
	
	I1227 10:05:56.890571  518484 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-681744"
	I1227 10:05:56.890669  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:56.913833  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:56.915335  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:56.915364  518484 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-681744 && echo "default-k8s-diff-port-681744" | sudo tee /etc/hostname
	I1227 10:05:57.076818  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-681744
	
	I1227 10:05:57.076987  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:57.102232  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:57.102542  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:57.102558  518484 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-681744' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-681744/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-681744' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:05:57.259077  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: 
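The provisioning step above sets the node hostname and patches /etc/hosts idempotently: the 127.0.1.1 entry is only rewritten if it does not already name the profile. A minimal manual re-check, assuming the default-k8s-diff-port-681744 container is still running and using the same minikube binary as this job (hypothetical verification, not part of the test):

    # Re-run the checks the provisioner just performed, from the host.
    out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- \
      "hostname && grep 127.0.1.1 /etc/hosts"
    # Expected: the hostname and a 127.0.1.1 line both read "default-k8s-diff-port-681744".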
	I1227 10:05:57.259161  518484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:05:57.259206  518484 ubuntu.go:190] setting up certificates
	I1227 10:05:57.259249  518484 provision.go:84] configureAuth start
	I1227 10:05:57.259345  518484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:05:57.283158  518484 provision.go:143] copyHostCerts
	I1227 10:05:57.283222  518484 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:05:57.283237  518484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:05:57.283307  518484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:05:57.283400  518484 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:05:57.283406  518484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:05:57.283431  518484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:05:57.283477  518484 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:05:57.283482  518484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:05:57.283504  518484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:05:57.283548  518484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-681744 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-681744 localhost minikube]
	I1227 10:05:57.507051  518484 provision.go:177] copyRemoteCerts
	I1227 10:05:57.507170  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:05:57.507231  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:57.528691  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:57.635255  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:05:57.655681  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 10:05:57.682632  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:05:57.702794  518484 provision.go:87] duration metric: took 443.508337ms to configureAuth
	I1227 10:05:57.702868  518484 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:05:57.703097  518484 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:57.703255  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:57.723603  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:57.723913  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:57.723928  518484 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:05:58.152233  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:05:58.152325  518484 machine.go:97] duration metric: took 4.440563065s to provisionDockerMachine
	I1227 10:05:58.152359  518484 start.go:293] postStartSetup for "default-k8s-diff-port-681744" (driver="docker")
	I1227 10:05:58.152388  518484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:05:58.152472  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:05:58.152529  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.181862  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.282501  518484 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:05:58.286212  518484 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:05:58.286239  518484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:05:58.286250  518484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:05:58.286304  518484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:05:58.286382  518484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:05:58.286485  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:05:58.297827  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:05:58.326405  518484 start.go:296] duration metric: took 174.012081ms for postStartSetup
	I1227 10:05:58.326542  518484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:05:58.326599  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.364076  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.472170  518484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:05:58.477172  518484 fix.go:56] duration metric: took 5.192395113s for fixHost
	I1227 10:05:58.477202  518484 start.go:83] releasing machines lock for "default-k8s-diff-port-681744", held for 5.192454461s
	I1227 10:05:58.477278  518484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:05:58.502486  518484 ssh_runner.go:195] Run: cat /version.json
	I1227 10:05:58.502539  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.502884  518484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:05:58.502933  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.534847  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.550064  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.658220  518484 ssh_runner.go:195] Run: systemctl --version
	I1227 10:05:58.758547  518484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:05:58.811912  518484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:05:58.817693  518484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:05:58.817822  518484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:05:58.827919  518484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:05:58.827994  518484 start.go:496] detecting cgroup driver to use...
	I1227 10:05:58.828041  518484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:05:58.828130  518484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:05:58.846071  518484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:05:58.861947  518484 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:05:58.862009  518484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:05:58.880391  518484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:05:58.895831  518484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:05:59.067263  518484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:05:59.225089  518484 docker.go:234] disabling docker service ...
	I1227 10:05:59.225204  518484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:05:59.242803  518484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:05:59.257292  518484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:05:59.411553  518484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:05:59.627884  518484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:05:59.642638  518484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:05:59.663779  518484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:05:59.663893  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.682891  518484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:05:59.683046  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.695440  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.708924  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.721631  518484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:05:59.732260  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.745205  518484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.756887  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.768971  518484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:05:59.779280  518484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:05:59.789388  518484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:05:59.953365  518484 ssh_runner.go:195] Run: sudo systemctl restart crio
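The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl, before the service is restarted. A sketch of confirming the drop-in inside the node, assuming ssh access to this profile (expected values are reconstructed from the commands above, the actual file may differ):

    # Inspect the keys the sed edits above are expected to have set.
    out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- \
      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # Roughly expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",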
	I1227 10:06:00.681862  518484 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:06:00.682007  518484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:06:00.689572  518484 start.go:574] Will wait 60s for crictl version
	I1227 10:06:00.689693  518484 ssh_runner.go:195] Run: which crictl
	I1227 10:06:00.700714  518484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:06:00.739086  518484 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:06:00.739238  518484 ssh_runner.go:195] Run: crio --version
	I1227 10:06:00.781291  518484 ssh_runner.go:195] Run: crio --version
	I1227 10:06:00.825955  518484 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 10:05:57.487853  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:05:59.996763  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:00.828985  518484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-681744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:06:00.848078  518484 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:06:00.852365  518484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:06:00.865160  518484 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:06:00.865282  518484 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:06:00.865342  518484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:06:00.927760  518484 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:06:00.927781  518484 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:06:00.927838  518484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:06:00.972881  518484 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:06:00.972951  518484 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:06:00.972977  518484 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1227 10:06:00.973107  518484 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-681744 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
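The unit fragment above is the kubelet drop-in minikube generates for this node (bootstrap kubeconfig, --cgroups-per-qos=false, --hostname-override and --node-ip); it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. One way to inspect what systemd actually loaded, assuming ssh access to the profile (illustrative only):

    # Show the kubelet unit plus all drop-ins as systemd resolves them.
    out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- \
      "sudo systemctl cat kubelet"
    # The [Service] section should contain the ExecStart flags shown in the log above.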
	I1227 10:06:00.973206  518484 ssh_runner.go:195] Run: crio config
	I1227 10:06:01.064250  518484 cni.go:84] Creating CNI manager for ""
	I1227 10:06:01.064316  518484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:06:01.064347  518484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:06:01.064405  518484 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-681744 NodeName:default-k8s-diff-port-681744 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:06:01.064573  518484 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-681744"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
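The document above is the kubeadm config minikube renders for this restart: an InitConfiguration (CRI socket, node name, kubelet extra args), a ClusterConfiguration (API server on port 8444, control-plane.minikube.internal:8444 endpoint, pod and service CIDRs), a KubeletConfiguration, and a KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new just below. A sketch of validating it in place, assuming the kubeadm binary bundled under /var/lib/minikube/binaries/v1.35.0 supports the `config validate` subcommand (hypothetical check):

    # Validate the rendered kubeadm config inside the node.
    out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- \
      "sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"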
	
	I1227 10:06:01.064666  518484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:06:01.075050  518484 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:06:01.075166  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:06:01.084369  518484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 10:06:01.104200  518484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:06:01.120954  518484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 10:06:01.136864  518484 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:06:01.142164  518484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:06:01.153659  518484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:06:01.322291  518484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:06:01.341114  518484 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744 for IP: 192.168.85.2
	I1227 10:06:01.341187  518484 certs.go:195] generating shared ca certs ...
	I1227 10:06:01.341218  518484 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:01.341418  518484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:06:01.341492  518484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:06:01.341526  518484 certs.go:257] generating profile certs ...
	I1227 10:06:01.341654  518484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.key
	I1227 10:06:01.341759  518484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key.263a07fe
	I1227 10:06:01.341829  518484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key
	I1227 10:06:01.341973  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:06:01.342046  518484 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:06:01.342083  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:06:01.342140  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:06:01.342202  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:06:01.342251  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:06:01.342333  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:06:01.342945  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:06:01.374867  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:06:01.399379  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:06:01.423165  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:06:01.448879  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 10:06:01.469081  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:06:01.498883  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:06:01.523730  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:06:01.545339  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:06:01.567789  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:06:01.598260  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:06:01.690428  518484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:06:01.714305  518484 ssh_runner.go:195] Run: openssl version
	I1227 10:06:01.724958  518484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.748209  518484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:06:01.759604  518484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.772059  518484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.772176  518484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.830709  518484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:06:01.839601  518484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.851039  518484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:06:01.859566  518484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.864248  518484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.864318  518484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.937956  518484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:06:01.947115  518484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:06:01.957022  518484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:06:01.965496  518484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:06:01.969996  518484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:06:01.970081  518484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:06:02.018673  518484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
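Each certificate copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is what the `openssl x509 -hash -noout` calls above compute (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal sketch of the same derivation, assuming a PEM certificate path is substituted for the example below:

    # Derive the subject-hash filename OpenSSL's certificate lookup expects.
    CERT=/usr/share/ca-certificates/minikubeCA.pem   # example path taken from the log
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    # The ".0" suffix distinguishes multiple certificates that hash to the same value.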
	I1227 10:06:02.026996  518484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:06:02.031568  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:06:02.077583  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:06:02.122618  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:06:02.173186  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:06:02.223354  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:06:02.320629  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
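The `openssl x509 -noout -checkend 86400` calls above exit 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how minikube decides the existing control-plane certificates can be reused. A small sketch of the same check over those paths, assuming it is run inside the node (for example via minikube ssh):

    # Report any control-plane cert that would expire within the next 24 hours.
    for c in /var/lib/minikube/certs/apiserver-etcd-client.crt \
             /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt \
             /var/lib/minikube/certs/etcd/healthcheck-client.crt \
             /var/lib/minikube/certs/etcd/peer.crt \
             /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
    done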
	I1227 10:06:02.423878  518484 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:06:02.423973  518484 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:06:02.424056  518484 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:06:02.523020  518484 cri.go:96] found id: ""
	I1227 10:06:02.523109  518484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:06:02.545226  518484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:06:02.545309  518484 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:06:02.545390  518484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:06:02.564336  518484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:06:02.565275  518484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-681744" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:06:02.565860  518484 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-681744" cluster setting kubeconfig missing "default-k8s-diff-port-681744" context setting]
	I1227 10:06:02.566723  518484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:02.568704  518484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:06:02.594589  518484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:06:02.594623  518484 kubeadm.go:602] duration metric: took 49.294752ms to restartPrimaryControlPlane
	I1227 10:06:02.594634  518484 kubeadm.go:403] duration metric: took 170.76596ms to StartCluster
	I1227 10:06:02.594649  518484 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:02.594716  518484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:06:02.596132  518484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
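The kubeconfig at /home/jenkins/minikube-integration/22344-301174/kubeconfig was missing both the cluster and context entries for this profile, so minikube rewrites it here. A quick way to confirm the repair from the host, assuming kubectl is available and relying on minikube naming the context after the profile (illustrative check):

    # Verify the repaired kubeconfig now carries the profile's context.
    kubectl config get-contexts default-k8s-diff-port-681744 \
      --kubeconfig /home/jenkins/minikube-integration/22344-301174/kubeconfig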
	I1227 10:06:02.596353  518484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:06:02.596795  518484 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:02.596856  518484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:06:02.596947  518484 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-681744"
	I1227 10:06:02.596969  518484 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-681744"
	I1227 10:06:02.596991  518484 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-681744"
	W1227 10:06:02.596998  518484 addons.go:248] addon dashboard should already be in state true
	I1227 10:06:02.597024  518484 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:02.597573  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.597745  518484 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-681744"
	W1227 10:06:02.597769  518484 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:06:02.597821  518484 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:02.598365  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.598914  518484 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-681744"
	I1227 10:06:02.598948  518484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-681744"
	I1227 10:06:02.599211  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.602000  518484 out.go:179] * Verifying Kubernetes components...
	I1227 10:06:02.605920  518484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:06:02.649146  518484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:06:02.649509  518484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:06:02.655559  518484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:06:02.658839  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:06:02.658864  518484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:06:02.658925  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:02.659950  518484 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-681744"
	W1227 10:06:02.659976  518484 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:06:02.660004  518484 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:02.660420  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.661325  518484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:06:02.661344  518484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:06:02.661395  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:02.711383  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:06:02.712298  518484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:06:02.712313  518484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:06:02.712370  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:02.720472  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:06:02.752027  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	W1227 10:06:02.488162  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:04.508418  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:03.031483  518484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:06:03.123128  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:06:03.123197  518484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:06:03.140802  518484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:06:03.283867  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:06:03.283944  518484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:06:03.285209  518484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:06:03.395515  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:06:03.395590  518484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:06:03.459415  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:06:03.459479  518484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:06:03.516043  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:06:03.516120  518484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:06:03.571624  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:06:03.571710  518484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:06:03.612030  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:06:03.612102  518484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:06:03.674340  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:06:03.674385  518484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:06:03.714635  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:06:03.714659  518484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:06:03.771610  518484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:06:07.889971  518484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.858408647s)
	I1227 10:06:07.890047  518484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.749170511s)
	I1227 10:06:07.890369  518484 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.605087074s)
	I1227 10:06:07.890413  518484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-681744" to be "Ready" ...
	I1227 10:06:07.890684  518484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.119043643s)
	I1227 10:06:07.894237  518484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-681744 addons enable metrics-server
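The dashboard addon above is installed by staging each manifest under /etc/kubernetes/addons over SSH and then applying them all in a single kubectl invocation against the node-local kubeconfig (that 4.1s apply completes at 10:06:07). A hedged Go sketch of that final step, shelling out the same command line the log shows; only two of the ten manifests are listed here for brevity:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Abbreviated form of the apply command ssh_runner executes in the log above.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
			"-f", "/etc/kubernetes/addons/dashboard-svc.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
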
	
	I1227 10:06:07.921065  518484 node_ready.go:49] node "default-k8s-diff-port-681744" is "Ready"
	I1227 10:06:07.921105  518484 node_ready.go:38] duration metric: took 30.674983ms for node "default-k8s-diff-port-681744" to be "Ready" ...
	I1227 10:06:07.921121  518484 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:06:07.921195  518484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:06:07.939134  518484 api_server.go:72] duration metric: took 5.342743487s to wait for apiserver process to appear ...
	I1227 10:06:07.939163  518484 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:06:07.939183  518484 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 10:06:07.939955  518484 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:06:07.942787  518484 addons.go:530] duration metric: took 5.345924474s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:06:07.955568  518484 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:06:07.955600  518484 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
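The 500 above is expected during a restart: every other check passes and only the rbac/bootstrap-roles post-start hook is still pending, so api_server.go simply keeps polling /healthz until it returns 200, which happens on the next attempt at 10:06:08 below. A minimal sketch of that kind of polling loop; skipping TLS verification here is an assumption made purely for brevity (minikube itself trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for brevity only; verify against the cluster CA in real use.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.85.2:8444/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthy")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
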
	W1227 10:06:06.985466  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:09.485125  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:08.440182  518484 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 10:06:08.448747  518484 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1227 10:06:08.449896  518484 api_server.go:141] control plane version: v1.35.0
	I1227 10:06:08.449920  518484 api_server.go:131] duration metric: took 510.750118ms to wait for apiserver health ...
	I1227 10:06:08.449930  518484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:06:08.453803  518484 system_pods.go:59] 8 kube-system pods found
	I1227 10:06:08.453886  518484 system_pods.go:61] "coredns-7d764666f9-gsk6s" [5cd01233-f1ab-4fa5-b523-fcd838dbbdad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:06:08.453911  518484 system_pods.go:61] "etcd-default-k8s-diff-port-681744" [fcb8304d-4099-4c32-960d-a219ab755fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:06:08.453956  518484 system_pods.go:61] "kindnet-n6bcg" [fa32b3d6-de74-4996-8943-cd4072b7a4e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:06:08.453984  518484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-681744" [0ced8a70-d9e1-49bf-89a8-3c243fa652d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:06:08.454031  518484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-681744" [2006aa2c-c71c-4cae-b454-e688d30f225a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:06:08.454057  518484 system_pods.go:61] "kube-proxy-6wq7w" [cd457947-9b5f-43a6-9d83-24f1619c3977] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:06:08.454078  518484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-681744" [f2ad2e17-a1cf-4419-855e-eecaeedfd7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:06:08.454113  518484 system_pods.go:61] "storage-provisioner" [7d515dc0-eaac-424b-9308-be2c50a7d4fc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:06:08.454137  518484 system_pods.go:74] duration metric: took 4.200586ms to wait for pod list to return data ...
	I1227 10:06:08.454180  518484 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:06:08.457122  518484 default_sa.go:45] found service account: "default"
	I1227 10:06:08.457181  518484 default_sa.go:55] duration metric: took 2.961965ms for default service account to be created ...
	I1227 10:06:08.457205  518484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:06:08.460123  518484 system_pods.go:86] 8 kube-system pods found
	I1227 10:06:08.460195  518484 system_pods.go:89] "coredns-7d764666f9-gsk6s" [5cd01233-f1ab-4fa5-b523-fcd838dbbdad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:06:08.460222  518484 system_pods.go:89] "etcd-default-k8s-diff-port-681744" [fcb8304d-4099-4c32-960d-a219ab755fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:06:08.460270  518484 system_pods.go:89] "kindnet-n6bcg" [fa32b3d6-de74-4996-8943-cd4072b7a4e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:06:08.460301  518484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-681744" [0ced8a70-d9e1-49bf-89a8-3c243fa652d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:06:08.460343  518484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-681744" [2006aa2c-c71c-4cae-b454-e688d30f225a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:06:08.460371  518484 system_pods.go:89] "kube-proxy-6wq7w" [cd457947-9b5f-43a6-9d83-24f1619c3977] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:06:08.460396  518484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-681744" [f2ad2e17-a1cf-4419-855e-eecaeedfd7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:06:08.460437  518484 system_pods.go:89] "storage-provisioner" [7d515dc0-eaac-424b-9308-be2c50a7d4fc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:06:08.460462  518484 system_pods.go:126] duration metric: took 3.238638ms to wait for k8s-apps to be running ...
	I1227 10:06:08.460485  518484 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:06:08.460577  518484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:06:08.476335  518484 system_svc.go:56] duration metric: took 15.841059ms WaitForService to wait for kubelet
	I1227 10:06:08.476413  518484 kubeadm.go:587] duration metric: took 5.880026408s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:06:08.476448  518484 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:06:08.479399  518484 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:06:08.479479  518484 node_conditions.go:123] node cpu capacity is 2
	I1227 10:06:08.479507  518484 node_conditions.go:105] duration metric: took 3.020542ms to run NodePressure ...
	I1227 10:06:08.479533  518484 start.go:242] waiting for startup goroutines ...
	I1227 10:06:08.479567  518484 start.go:247] waiting for cluster config update ...
	I1227 10:06:08.479596  518484 start.go:256] writing updated cluster config ...
	I1227 10:06:08.479938  518484 ssh_runner.go:195] Run: rm -f paused
	I1227 10:06:08.487051  518484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:06:08.492100  518484 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gsk6s" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:06:10.497783  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:12.500164  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:11.485616  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:13.485704  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:14.999260  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:17.001832  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:15.986510  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:18.489177  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:19.513985  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:21.998332  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:20.985392  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:23.485026  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:25.485519  515844 pod_ready.go:94] pod "coredns-7d764666f9-bdwpn" is "Ready"
	I1227 10:06:25.485550  515844 pod_ready.go:86] duration metric: took 36.506319777s for pod "coredns-7d764666f9-bdwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.488371  515844 pod_ready.go:83] waiting for pod "etcd-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.493600  515844 pod_ready.go:94] pod "etcd-embed-certs-017122" is "Ready"
	I1227 10:06:25.493627  515844 pod_ready.go:86] duration metric: took 5.226445ms for pod "etcd-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.498914  515844 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.504246  515844 pod_ready.go:94] pod "kube-apiserver-embed-certs-017122" is "Ready"
	I1227 10:06:25.504277  515844 pod_ready.go:86] duration metric: took 5.330684ms for pod "kube-apiserver-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.508132  515844 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.683121  515844 pod_ready.go:94] pod "kube-controller-manager-embed-certs-017122" is "Ready"
	I1227 10:06:25.683151  515844 pod_ready.go:86] duration metric: took 174.98778ms for pod "kube-controller-manager-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.883310  515844 pod_ready.go:83] waiting for pod "kube-proxy-knmrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.283381  515844 pod_ready.go:94] pod "kube-proxy-knmrq" is "Ready"
	I1227 10:06:26.283413  515844 pod_ready.go:86] duration metric: took 400.074163ms for pod "kube-proxy-knmrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.484285  515844 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.883081  515844 pod_ready.go:94] pod "kube-scheduler-embed-certs-017122" is "Ready"
	I1227 10:06:26.883111  515844 pod_ready.go:86] duration metric: took 398.792424ms for pod "kube-scheduler-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.883126  515844 pod_ready.go:40] duration metric: took 37.908320795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:06:26.944215  515844 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:06:26.947154  515844 out.go:203] 
	W1227 10:06:26.950475  515844 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:06:26.953467  515844 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:06:26.956360  515844 out.go:179] * Done! kubectl is now configured to use "embed-certs-017122" cluster and "default" namespace by default
	W1227 10:06:24.498087  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:26.498310  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:28.498454  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:30.997868  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:32.997927  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:35.498312  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
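Both start jobs above (process 515844 for embed-certs-017122 and 518484 for default-k8s-diff-port-681744) finish by polling the kube-system pods carrying one of the listed labels until each reports the Ready condition; the repeated W-level lines are those polls failing and retrying. A minimal client-go sketch of one such readiness check (namespace and label selector come from the log; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed path; minikube keeps an equivalent kubeconfig on the node.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v\n", p.Name, ready)
		}
	}
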
	
	
	==> CRI-O <==
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.03122619Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.03479343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.034830871Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.034855224Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.038527655Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.038563438Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.038586955Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.042104053Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.042141042Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.042263357Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.045978333Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.046021533Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.310369286Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=57480d83-7b08-4437-a8c1-25cdd5b1234d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.311792828Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bab83d87-8e5d-43ca-b0b7-d9d9d5bd3e98 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.312736306Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper" id=47e734af-b569-4202-ab62-5205941fda1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.312832152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.319137223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.319651855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.338806646Z" level=info msg="Created container 9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper" id=47e734af-b569-4202-ab62-5205941fda1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.339804336Z" level=info msg="Starting container: 9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392" id=a947d877-70b8-4fd1-8b40-a5f80ceb5a07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.341680807Z" level=info msg="Started container" PID=1736 containerID=9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper id=a947d877-70b8-4fd1-8b40-a5f80ceb5a07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdd1e74d83ebdf713c8a4d9e688cbd687b52ddf461bc8fe53f65f0fbb6e20787
	Dec 27 10:06:32 embed-certs-017122 conmon[1734]: conmon 9bb2eed3a1a16c237734 <ninfo>: container 1736 exited with status 1
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.55074687Z" level=info msg="Removing container: a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94" id=853a0721-c06d-4133-ac50-98645a29f60e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.56244437Z" level=info msg="Error loading conmon cgroup of container a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94: cgroup deleted" id=853a0721-c06d-4133-ac50-98645a29f60e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.56705415Z" level=info msg="Removed container a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper" id=853a0721-c06d-4133-ac50-98645a29f60e name=/runtime.v1.RuntimeService/RemoveContainer
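The CNI monitoring lines above are CRI-O's watcher on /etc/cni/net.d reacting to kindnet rewriting 10-kindnet.conflist: each WRITE, RENAME and CREATE event triggers a config reload and re-selection of the default network. A hedged illustration of that mechanism using fsnotify (an illustration only, not CRI-O's actual code):

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()

		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for ev := range w.Events {
			// CRI-O reloads its CNI configuration on events like these.
			if ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		}
	}
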
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9bb2eed3a1a16       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   bdd1e74d83ebd       dashboard-metrics-scraper-867fb5f87b-tkk2r   kubernetes-dashboard
	c42ed83b7c4b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   3294cb6595897       storage-provisioner                          kube-system
	185eb58aa58d9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   a6cc41a704ad5       kubernetes-dashboard-b84665fb8-zzkkj         kubernetes-dashboard
	4c9ab6fe53512       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           52 seconds ago      Running             coredns                     1                   f352b63e7f341       coredns-7d764666f9-bdwpn                     kube-system
	2a07a957c39cf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   62c36ee471702       busybox                                      default
	d20768561f33e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   3294cb6595897       storage-provisioner                          kube-system
	2fd5b0ea30513       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   9714334b98c84       kube-proxy-knmrq                             kube-system
	db5a895de4aa9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   6342ce0a345e4       kindnet-7ts9b                                kube-system
	0cc3bd645f392       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           57 seconds ago      Running             kube-scheduler              1                   fa657458e88fc       kube-scheduler-embed-certs-017122            kube-system
	76f3b93c8f471       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           57 seconds ago      Running             kube-controller-manager     1                   79e5aeb30e0de       kube-controller-manager-embed-certs-017122   kube-system
	6cfa79ecfd13f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           57 seconds ago      Running             kube-apiserver              1                   519be58815797       kube-apiserver-embed-certs-017122            kube-system
	44cdcc6347ae5       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           57 seconds ago      Running             etcd                        1                   45e83682d1e47       etcd-embed-certs-017122                      kube-system
	
	
	==> coredns [4c9ab6fe53512b9ed7548d8e5e3e67c177bc266d4293df4e51df87ff6e091014] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57344 - 64438 "HINFO IN 7110569481248417269.1535362450882622996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011535578s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
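The coredns log above shows the kubernetes plugin waiting for the API server to come back after the restart; until its informers sync, the ready plugin keeps answering "Plugins not ready" and the pod stays NotReady, which is what the pod_ready polls earlier in this log are waiting out. A small sketch of probing that readiness endpoint (the ready plugin listens on port 8181 by default; the pod IP below is a hypothetical example, not taken from this report):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// CoreDNS's ready plugin serves /ready; 10.244.0.5 is a made-up pod IP.
		resp, err := http.Get("http://10.244.0.5:8181/ready")
		if err != nil {
			fmt.Println("not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ready endpoint returned", resp.StatusCode)
	}
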
	
	
	==> describe nodes <==
	Name:               embed-certs-017122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-017122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=embed-certs-017122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:04:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-017122
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:06:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-017122
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                65221525-5166-4f0b-9b53-9db790e49fde
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-bdwpn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-embed-certs-017122                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-7ts9b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-017122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-017122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-knmrq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-017122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-tkk2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zzkkj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node embed-certs-017122 event: Registered Node embed-certs-017122 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node embed-certs-017122 event: Registered Node embed-certs-017122 in Controller
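The percentages in the Allocated resources table above are computed against the node's allocatable figures listed earlier: 850m of CPU requests against 2 allocatable CPUs (2000m) is 850/2000 = 42.5%, shown as 42%, and 220Mi of memory requests against 8022304Ki (~7.65Gi) allocatable is just under 3%, shown as 2%.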
	
	
	==> dmesg <==
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44cdcc6347ae5077ffdabfa2362bee311b3b59c6c54028ea82f59bab340bbb83] <==
	{"level":"info","ts":"2025-12-27T10:05:44.382751Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:05:44.382915Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:05:44.384387Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:05:44.398472Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:05:44.398508Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:05:44.399255Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:05:44.399292Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:05:44.767705Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:44.767812Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:44.767888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:44.767930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:05:44.767976Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.770193Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.770264Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:05:44.770308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.770345Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.774454Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-017122 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:05:44.774625Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:05:44.775542Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:05:44.791079Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:05:44.792046Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:05:44.795208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:05:44.795638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:05:44.796213Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:05:44.830666Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:06:42 up  2:49,  0 user,  load average: 3.46, 2.74, 2.33
	Linux embed-certs-017122 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [db5a895de4aa9bea7fd27e010a93c2e73b2cd31487927aa4bc444480a74acabc] <==
	I1227 10:05:48.822311       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:05:48.822725       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:05:48.822912       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:05:48.822959       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:05:48.823012       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:05:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:05:49.021621       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:05:49.021763       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:05:49.021801       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:05:49.023139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:06:19.024603       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:06:19.024606       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:06:19.024759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:06:19.024825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:06:20.222806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:06:20.222839       1 metrics.go:72] Registering metrics
	I1227 10:06:20.222910       1 controller.go:711] "Syncing nftables rules"
	I1227 10:06:29.021979       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:06:29.022673       1 main.go:301] handling current node
	I1227 10:06:39.026826       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:06:39.026860       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cfa79ecfd13f3a2204b0eca76862e1ae58e5961230bbbb0e2c311e1886de756] <==
	I1227 10:05:47.403925       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:05:47.416093       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.416123       1 policy_source.go:248] refreshing policies
	I1227 10:05:47.416352       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.416431       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:05:47.416444       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:05:47.416583       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:05:47.418835       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.418883       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.418904       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:05:47.419115       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:05:47.419318       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:05:47.435666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 10:05:47.442334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:05:48.004505       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:05:48.077663       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:05:48.135046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:05:48.183655       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:05:48.200815       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:05:48.217646       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:05:48.389610       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.196.96"}
	I1227 10:05:48.422481       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.163.170"}
	I1227 10:05:50.862115       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:05:50.911551       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:05:50.963216       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [76f3b93c8f471b74c03f3058edede420056a0cf37682f580aa788c86b60dd759] <==
	I1227 10:05:50.483046       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.483146       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:05:50.483184       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:50.483210       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.483279       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.485449       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.486372       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.486787       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-017122"
	I1227 10:05:50.488469       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:05:50.488059       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488070       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488077       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488083       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488091       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488099       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488105       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488110       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488116       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.486965       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.512064       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:50.518961       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.585117       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.585238       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:05:50.585276       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:05:50.612882       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [2fd5b0ea3051372087c322ed48f365491b0c576c41d49c831586bb295e8cd4b1] <==
	I1227 10:05:48.814432       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:05:48.899724       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:49.000476       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:49.000615       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:05:49.000745       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:05:49.025223       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:05:49.025289       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:05:49.030744       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:05:49.031098       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:05:49.031133       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:05:49.032176       1 config.go:200] "Starting service config controller"
	I1227 10:05:49.032198       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:05:49.032507       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:05:49.032522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:05:49.032542       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:05:49.032546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:05:49.033199       1 config.go:309] "Starting node config controller"
	I1227 10:05:49.033217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:05:49.033224       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:05:49.133262       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:05:49.133283       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:05:49.133301       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0cc3bd645f392c02eb74608b63e52a0c2ca4f3ab5d2fa6e9de3815e6b3f84037] <==
	I1227 10:05:45.285744       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:05:47.250582       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:05:47.250617       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:05:47.250627       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:05:47.250634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:05:47.369265       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:05:47.369299       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:05:47.371704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:05:47.371835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:05:47.371846       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:47.371872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:05:47.474431       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:06:00 embed-certs-017122 kubelet[781]: E1227 10:06:00.430878     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:04 embed-certs-017122 kubelet[781]: E1227 10:06:04.455076     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zzkkj" containerName="kubernetes-dashboard"
	Dec 27 10:06:05 embed-certs-017122 kubelet[781]: E1227 10:06:05.457506     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zzkkj" containerName="kubernetes-dashboard"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: E1227 10:06:10.309501     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.309554     781 scope.go:122] "RemoveContainer" containerID="c16f9f833240b7ca49b7c9bae5e01d879dbec8f3ec59b3a638d825cb21992277"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.471998     781 scope.go:122] "RemoveContainer" containerID="c16f9f833240b7ca49b7c9bae5e01d879dbec8f3ec59b3a638d825cb21992277"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: E1227 10:06:10.472335     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.472363     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: E1227 10:06:10.472519     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.496530     781 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zzkkj" podStartSLOduration=7.025446374 podStartE2EDuration="19.496509795s" podCreationTimestamp="2025-12-27 10:05:51 +0000 UTC" firstStartedPulling="2025-12-27 10:05:51.531659875 +0000 UTC m=+8.418332403" lastFinishedPulling="2025-12-27 10:06:04.002723296 +0000 UTC m=+20.889395824" observedRunningTime="2025-12-27 10:06:04.479480376 +0000 UTC m=+21.366152920" watchObservedRunningTime="2025-12-27 10:06:10.496509795 +0000 UTC m=+27.383182323"
	Dec 27 10:06:19 embed-certs-017122 kubelet[781]: I1227 10:06:19.503765     781 scope.go:122] "RemoveContainer" containerID="d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275"
	Dec 27 10:06:20 embed-certs-017122 kubelet[781]: E1227 10:06:20.093704     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:20 embed-certs-017122 kubelet[781]: I1227 10:06:20.093751     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:20 embed-certs-017122 kubelet[781]: E1227 10:06:20.093921     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:25 embed-certs-017122 kubelet[781]: E1227 10:06:25.318404     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bdwpn" containerName="coredns"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: E1227 10:06:32.309786     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: I1227 10:06:32.309847     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: I1227 10:06:32.547731     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: E1227 10:06:32.547973     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: I1227 10:06:32.548275     781 scope.go:122] "RemoveContainer" containerID="9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: E1227 10:06:32.548494     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:39 embed-certs-017122 kubelet[781]: I1227 10:06:39.287105     781 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 10:06:39 embed-certs-017122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:06:39 embed-certs-017122 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:06:39 embed-certs-017122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [185eb58aa58d9d750b982bfdc9c22d6399ce253489b24c332510411e62876981] <==
	2025/12/27 10:06:04 Using namespace: kubernetes-dashboard
	2025/12/27 10:06:04 Using in-cluster config to connect to apiserver
	2025/12/27 10:06:04 Using secret token for csrf signing
	2025/12/27 10:06:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:06:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:06:04 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:06:04 Generating JWE encryption key
	2025/12/27 10:06:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:06:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:06:05 Initializing JWE encryption key from synchronized object
	2025/12/27 10:06:05 Creating in-cluster Sidecar client
	2025/12/27 10:06:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:05 Serving insecurely on HTTP port: 9090
	2025/12/27 10:06:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:04 Starting overwatch
	
	
	==> storage-provisioner [c42ed83b7c4b6ba64ecae6ede25519b2ab7b1b1805784e124265c4431dd093ac] <==
	I1227 10:06:19.597695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:06:19.619299       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:06:19.619857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:06:19.623361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:23.078470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:27.339318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:30.937381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:33.990711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:37.013465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:37.019180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:37.019638       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:06:37.019930       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-017122_25db5ca2-1039-49ca-b74c-77cbd74e1175!
	I1227 10:06:37.020652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d9ebc47-35a9-4be4-b5b3-d21c89072018", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-017122_25db5ca2-1039-49ca-b74c-77cbd74e1175 became leader
	W1227 10:06:37.029130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:37.033581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:37.120976       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-017122_25db5ca2-1039-49ca-b74c-77cbd74e1175!
	W1227 10:06:39.040648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:39.052556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:41.056122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:41.064288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275] <==
	I1227 10:05:48.738135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:06:18.744527       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-017122 -n embed-certs-017122
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-017122 -n embed-certs-017122: exit status 2 (362.515163ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-017122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-017122
helpers_test.go:244: (dbg) docker inspect embed-certs-017122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4",
	        "Created": "2025-12-27T10:04:22.683463694Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:05:36.177427623Z",
	            "FinishedAt": "2025-12-27T10:05:35.31446463Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/hosts",
	        "LogPath": "/var/lib/docker/containers/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4/f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4-json.log",
	        "Name": "/embed-certs-017122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-017122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-017122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2b20a6dc27416978f803efe5894b77a3f380f056e00a1283ea3861f5ac2afb4",
	                "LowerDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/052d61c91b3bdbfd03ac674bac933f73adb21bf22ba18d065f91f0746a20b7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-017122",
	                "Source": "/var/lib/docker/volumes/embed-certs-017122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-017122",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-017122",
	                "name.minikube.sigs.k8s.io": "embed-certs-017122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39b46dfffc95ef177f153210bb7a7e5e7aa063e1ce9641ef950769297f2ac25a",
	            "SandboxKey": "/var/run/docker/netns/39b46dfffc95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-017122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:5e:1c:c8:a6:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ffc320fafa322491008f70d428c80b42cc8ee40dadd5618a8bbe80fddaf33d5",
	                    "EndpointID": "fd12319fbd4646f7b6cbcb3359cc21942ae3d8520fec65a069abce5434e15c69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-017122",
	                        "f2b20a6dc274"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122: exit status 2 (350.456149ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-017122 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-017122 logs -n 25: (1.276680441s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-156305                                                                                                                                                │ old-k8s-version-156305       │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:02 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-021144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ stop    │ -p no-preload-021144 --alsologtostderr -v=3                                                                                                                              │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ addons  │ enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:03 UTC │
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                             │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                               │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                              │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p no-preload-021144                                                                                                                                                     │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p no-preload-021144                                                                                                                                                     │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p disable-driver-mounts-242374                                                                                                                                          │ disable-driver-mounts-242374 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p embed-certs-017122 --alsologtostderr -v=3                                                                                                                             │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                              │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                             │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:05:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:05:52.976420  518484 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:05:52.976641  518484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:52.976650  518484 out.go:374] Setting ErrFile to fd 2...
	I1227 10:05:52.976655  518484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:52.976920  518484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:05:52.977297  518484 out.go:368] Setting JSON to false
	I1227 10:05:52.978382  518484 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10102,"bootTime":1766819851,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:05:52.978455  518484 start.go:143] virtualization:  
	I1227 10:05:52.982838  518484 out.go:179] * [default-k8s-diff-port-681744] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:05:52.986821  518484 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:05:52.986876  518484 notify.go:221] Checking for updates...
	I1227 10:05:52.993293  518484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:05:52.996436  518484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:05:52.999552  518484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:05:53.002599  518484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:05:53.005653  518484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:05:53.009156  518484 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:53.009751  518484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:05:53.050650  518484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:05:53.050771  518484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:05:53.147744  518484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:05:53.138029179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:05:53.147848  518484 docker.go:319] overlay module found
	I1227 10:05:53.151424  518484 out.go:179] * Using the docker driver based on existing profile
	I1227 10:05:53.154777  518484 start.go:309] selected driver: docker
	I1227 10:05:53.154797  518484 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:05:53.154902  518484 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:05:53.155634  518484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:05:53.246584  518484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:05:53.237445187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:05:53.246893  518484 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:05:53.246913  518484 cni.go:84] Creating CNI manager for ""
	I1227 10:05:53.246966  518484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:05:53.247002  518484 start.go:353] cluster config:
	{Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:05:53.250199  518484 out.go:179] * Starting "default-k8s-diff-port-681744" primary control-plane node in "default-k8s-diff-port-681744" cluster
	I1227 10:05:53.252987  518484 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:05:53.256406  518484 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:05:53.259393  518484 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:05:53.259439  518484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:05:53.259450  518484 cache.go:65] Caching tarball of preloaded images
	I1227 10:05:53.259563  518484 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:05:53.259574  518484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:05:53.259697  518484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json ...
	I1227 10:05:53.259912  518484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:05:53.284599  518484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:05:53.284625  518484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:05:53.284641  518484 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:05:53.284670  518484 start.go:360] acquireMachinesLock for default-k8s-diff-port-681744: {Name:mk8a28038e1b078aa1c0d3cea0d9a4fa9fc07d3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:05:53.284734  518484 start.go:364] duration metric: took 41.601µs to acquireMachinesLock for "default-k8s-diff-port-681744"
	I1227 10:05:53.284761  518484 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:05:53.284770  518484 fix.go:54] fixHost starting: 
	I1227 10:05:53.285034  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:53.314882  518484 fix.go:112] recreateIfNeeded on default-k8s-diff-port-681744: state=Stopped err=<nil>
	W1227 10:05:53.314916  518484 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 10:05:50.985413  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:05:52.985766  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:05:55.486225  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:05:53.319285  518484 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-681744" ...
	I1227 10:05:53.319385  518484 cli_runner.go:164] Run: docker start default-k8s-diff-port-681744
	I1227 10:05:53.656481  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:05:53.686924  518484 kic.go:430] container "default-k8s-diff-port-681744" state is running.
	I1227 10:05:53.687416  518484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:05:53.711540  518484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/config.json ...
	I1227 10:05:53.711748  518484 machine.go:94] provisionDockerMachine start ...
	I1227 10:05:53.711808  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:53.738106  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:53.738488  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:53.738500  518484 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:05:53.739112  518484 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50078->127.0.0.1:33456: read: connection reset by peer
	I1227 10:05:56.890497  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-681744
	
	I1227 10:05:56.890571  518484 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-681744"
	I1227 10:05:56.890669  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:56.913833  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:56.915335  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:56.915364  518484 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-681744 && echo "default-k8s-diff-port-681744" | sudo tee /etc/hostname
	I1227 10:05:57.076818  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-681744
	
	I1227 10:05:57.076987  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:57.102232  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:57.102542  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:57.102558  518484 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-681744' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-681744/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-681744' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:05:57.259077  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:05:57.259161  518484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:05:57.259206  518484 ubuntu.go:190] setting up certificates
	I1227 10:05:57.259249  518484 provision.go:84] configureAuth start
	I1227 10:05:57.259345  518484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:05:57.283158  518484 provision.go:143] copyHostCerts
	I1227 10:05:57.283222  518484 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:05:57.283237  518484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:05:57.283307  518484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:05:57.283400  518484 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:05:57.283406  518484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:05:57.283431  518484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:05:57.283477  518484 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:05:57.283482  518484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:05:57.283504  518484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:05:57.283548  518484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-681744 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-681744 localhost minikube]
	I1227 10:05:57.507051  518484 provision.go:177] copyRemoteCerts
	I1227 10:05:57.507170  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:05:57.507231  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:57.528691  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:57.635255  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:05:57.655681  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 10:05:57.682632  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:05:57.702794  518484 provision.go:87] duration metric: took 443.508337ms to configureAuth
	I1227 10:05:57.702868  518484 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:05:57.703097  518484 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:57.703255  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:57.723603  518484 main.go:144] libmachine: Using SSH client type: native
	I1227 10:05:57.723913  518484 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1227 10:05:57.723928  518484 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:05:58.152233  518484 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:05:58.152325  518484 machine.go:97] duration metric: took 4.440563065s to provisionDockerMachine
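The step above writes /etc/sysconfig/crio.minikube over SSH and restarts CRI-O so the runtime picks up the --insecure-registry flag for the service CIDR. A minimal sketch of the same change applied by hand inside the node, using only the paths and values shown in the log (the options file is presumably consumed through an EnvironmentFile reference in the crio unit):

	# recreate the provisioner's change manually (values copied from the log above)
	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio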
	I1227 10:05:58.152359  518484 start.go:293] postStartSetup for "default-k8s-diff-port-681744" (driver="docker")
	I1227 10:05:58.152388  518484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:05:58.152472  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:05:58.152529  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.181862  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.282501  518484 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:05:58.286212  518484 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:05:58.286239  518484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:05:58.286250  518484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:05:58.286304  518484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:05:58.286382  518484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:05:58.286485  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:05:58.297827  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:05:58.326405  518484 start.go:296] duration metric: took 174.012081ms for postStartSetup
	I1227 10:05:58.326542  518484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:05:58.326599  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.364076  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.472170  518484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:05:58.477172  518484 fix.go:56] duration metric: took 5.192395113s for fixHost
	I1227 10:05:58.477202  518484 start.go:83] releasing machines lock for "default-k8s-diff-port-681744", held for 5.192454461s
	I1227 10:05:58.477278  518484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-681744
	I1227 10:05:58.502486  518484 ssh_runner.go:195] Run: cat /version.json
	I1227 10:05:58.502539  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.502884  518484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:05:58.502933  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:05:58.534847  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.550064  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:05:58.658220  518484 ssh_runner.go:195] Run: systemctl --version
	I1227 10:05:58.758547  518484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:05:58.811912  518484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:05:58.817693  518484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:05:58.817822  518484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:05:58.827919  518484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:05:58.827994  518484 start.go:496] detecting cgroup driver to use...
	I1227 10:05:58.828041  518484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:05:58.828130  518484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:05:58.846071  518484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:05:58.861947  518484 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:05:58.862009  518484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:05:58.880391  518484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:05:58.895831  518484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:05:59.067263  518484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:05:59.225089  518484 docker.go:234] disabling docker service ...
	I1227 10:05:59.225204  518484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:05:59.242803  518484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:05:59.257292  518484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:05:59.411553  518484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:05:59.627884  518484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:05:59.642638  518484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:05:59.663779  518484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:05:59.663893  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.682891  518484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:05:59.683046  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.695440  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.708924  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.721631  518484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:05:59.732260  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.745205  518484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.756887  518484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:05:59.768971  518484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:05:59.779280  518484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:05:59.789388  518484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:05:59.953365  518484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:06:00.681862  518484 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:06:00.682007  518484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:06:00.689572  518484 start.go:574] Will wait 60s for crictl version
	I1227 10:06:00.689693  518484 ssh_runner.go:195] Run: which crictl
	I1227 10:06:00.700714  518484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:06:00.739086  518484 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:06:00.739238  518484 ssh_runner.go:195] Run: crio --version
	I1227 10:06:00.781291  518484 ssh_runner.go:195] Run: crio --version
	I1227 10:06:00.825955  518484 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
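Once CRI-O is back up, the start path verifies it through crictl using the endpoint written to /etc/crictl.yaml earlier in this log. A short sketch of the same checks run manually (the expected version strings are the ones reported above; adjust if the runtime differs):

	# confirm the runtime answers on the configured socket
	sudo crictl version                    # expect RuntimeName: cri-o, RuntimeVersion: 1.34.3
	sudo crictl images --output json       # same listing the preload check below performs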
	W1227 10:05:57.487853  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:05:59.996763  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:00.828985  518484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-681744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:06:00.848078  518484 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:06:00.852365  518484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:06:00.865160  518484 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:06:00.865282  518484 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:06:00.865342  518484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:06:00.927760  518484 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:06:00.927781  518484 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:06:00.927838  518484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:06:00.972881  518484 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:06:00.972951  518484 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:06:00.972977  518484 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1227 10:06:00.973107  518484 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-681744 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:06:00.973206  518484 ssh_runner.go:195] Run: crio config
	I1227 10:06:01.064250  518484 cni.go:84] Creating CNI manager for ""
	I1227 10:06:01.064316  518484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:06:01.064347  518484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:06:01.064405  518484 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-681744 NodeName:default-k8s-diff-port-681744 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:06:01.064573  518484 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-681744"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:06:01.064666  518484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:06:01.075050  518484 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:06:01.075166  518484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:06:01.084369  518484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 10:06:01.104200  518484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:06:01.120954  518484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
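The kubeadm configuration rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before anything is applied. As a hedged aside, recent kubeadm releases ship a validate subcommand; assuming the bundled v1.35.0 binary supports it, the staged file could be checked by hand like this (not something the flow above runs):

	# hypothetical manual validation of the staged config
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new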
	I1227 10:06:01.136864  518484 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:06:01.142164  518484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:06:01.153659  518484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:06:01.322291  518484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:06:01.341114  518484 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744 for IP: 192.168.85.2
	I1227 10:06:01.341187  518484 certs.go:195] generating shared ca certs ...
	I1227 10:06:01.341218  518484 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:01.341418  518484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:06:01.341492  518484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:06:01.341526  518484 certs.go:257] generating profile certs ...
	I1227 10:06:01.341654  518484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.key
	I1227 10:06:01.341759  518484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key.263a07fe
	I1227 10:06:01.341829  518484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key
	I1227 10:06:01.341973  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:06:01.342046  518484 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:06:01.342083  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:06:01.342140  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:06:01.342202  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:06:01.342251  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:06:01.342333  518484 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:06:01.342945  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:06:01.374867  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:06:01.399379  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:06:01.423165  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:06:01.448879  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 10:06:01.469081  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:06:01.498883  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:06:01.523730  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:06:01.545339  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:06:01.567789  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:06:01.598260  518484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:06:01.690428  518484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:06:01.714305  518484 ssh_runner.go:195] Run: openssl version
	I1227 10:06:01.724958  518484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.748209  518484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:06:01.759604  518484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.772059  518484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.772176  518484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:06:01.830709  518484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:06:01.839601  518484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.851039  518484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:06:01.859566  518484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.864248  518484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.864318  518484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:06:01.937956  518484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:06:01.947115  518484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:06:01.957022  518484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:06:01.965496  518484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:06:01.969996  518484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:06:01.970081  518484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:06:02.018673  518484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:06:02.026996  518484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:06:02.031568  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:06:02.077583  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:06:02.122618  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:06:02.173186  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:06:02.223354  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:06:02.320629  518484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
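The block of openssl calls above is the certificate pre-flight: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours, which is what would force regeneration on a stale profile. The same check can be run by hand against any of the listed files, for example:

	# exit status 0 means the cert is still valid for at least 86400 seconds
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h+" || echo "expiring or unreadable"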
	I1227 10:06:02.423878  518484 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-681744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-681744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:06:02.423973  518484 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:06:02.424056  518484 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:06:02.523020  518484 cri.go:96] found id: ""
	I1227 10:06:02.523109  518484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:06:02.545226  518484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:06:02.545309  518484 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:06:02.545390  518484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:06:02.564336  518484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:06:02.565275  518484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-681744" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:06:02.565860  518484 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-681744" cluster setting kubeconfig missing "default-k8s-diff-port-681744" context setting]
	I1227 10:06:02.566723  518484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:02.568704  518484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:06:02.594589  518484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:06:02.594623  518484 kubeadm.go:602] duration metric: took 49.294752ms to restartPrimaryControlPlane
	I1227 10:06:02.594634  518484 kubeadm.go:403] duration metric: took 170.76596ms to StartCluster
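The restart decision above hinges on the diff of the staged config against the active one: because /var/tmp/minikube/kubeadm.yaml matches kubeadm.yaml.new, the code logs that no reconfiguration is required and skips re-running kubeadm. A sketch of that decision, assuming the same paths:

	# empty diff => reuse the running control plane, otherwise reconfigure
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "no reconfiguration needed"
	else
	  echo "config drift detected"
	fi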
	I1227 10:06:02.594649  518484 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:02.594716  518484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:06:02.596132  518484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:02.596353  518484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:06:02.596795  518484 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:02.596856  518484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:06:02.596947  518484 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-681744"
	I1227 10:06:02.596969  518484 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-681744"
	I1227 10:06:02.596991  518484 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-681744"
	W1227 10:06:02.596998  518484 addons.go:248] addon dashboard should already be in state true
	I1227 10:06:02.597024  518484 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:02.597573  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.597745  518484 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-681744"
	W1227 10:06:02.597769  518484 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:06:02.597821  518484 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:02.598365  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.598914  518484 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-681744"
	I1227 10:06:02.598948  518484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-681744"
	I1227 10:06:02.599211  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.602000  518484 out.go:179] * Verifying Kubernetes components...
	I1227 10:06:02.605920  518484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:06:02.649146  518484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:06:02.649509  518484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:06:02.655559  518484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:06:02.658839  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:06:02.658864  518484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:06:02.658925  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:02.659950  518484 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-681744"
	W1227 10:06:02.659976  518484 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:06:02.660004  518484 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:02.660420  518484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:02.661325  518484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:06:02.661344  518484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:06:02.661395  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:02.711383  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:06:02.712298  518484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:06:02.712313  518484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:06:02.712370  518484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:02.720472  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:06:02.752027  518484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	W1227 10:06:02.488162  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:04.508418  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:03.031483  518484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:06:03.123128  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:06:03.123197  518484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:06:03.140802  518484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:06:03.283867  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:06:03.283944  518484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:06:03.285209  518484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:06:03.395515  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:06:03.395590  518484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:06:03.459415  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:06:03.459479  518484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:06:03.516043  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:06:03.516120  518484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:06:03.571624  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:06:03.571710  518484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:06:03.612030  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:06:03.612102  518484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:06:03.674340  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:06:03.674385  518484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:06:03.714635  518484 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:06:03.714659  518484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:06:03.771610  518484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:06:07.889971  518484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.858408647s)
	I1227 10:06:07.890047  518484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.749170511s)
	I1227 10:06:07.890369  518484 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.605087074s)
	I1227 10:06:07.890413  518484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-681744" to be "Ready" ...
	I1227 10:06:07.890684  518484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.119043643s)
	I1227 10:06:07.894237  518484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-681744 addons enable metrics-server
	
	I1227 10:06:07.921065  518484 node_ready.go:49] node "default-k8s-diff-port-681744" is "Ready"
	I1227 10:06:07.921105  518484 node_ready.go:38] duration metric: took 30.674983ms for node "default-k8s-diff-port-681744" to be "Ready" ...
	I1227 10:06:07.921121  518484 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:06:07.921195  518484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:06:07.939134  518484 api_server.go:72] duration metric: took 5.342743487s to wait for apiserver process to appear ...
	I1227 10:06:07.939163  518484 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:06:07.939183  518484 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 10:06:07.939955  518484 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:06:07.942787  518484 addons.go:530] duration metric: took 5.345924474s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:06:07.955568  518484 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:06:07.955600  518484 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
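The 500 above is expected right after a restart: the rbac/bootstrap-roles post-start hook has not finished, so /healthz reports that hook as failed and the wait loop simply retries until it gets a 200, as the next probe below shows. A sketch of watching the same endpoint by hand, assuming 192.168.85.2:8444 is reachable and accepting the self-signed certificate with -k:

	# poll the apiserver health endpoint until it reports ok
	until curl -ks https://192.168.85.2:8444/healthz | grep -qx ok; do sleep 1; done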
	W1227 10:06:06.985466  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:09.485125  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:08.440182  518484 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 10:06:08.448747  518484 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1227 10:06:08.449896  518484 api_server.go:141] control plane version: v1.35.0
	I1227 10:06:08.449920  518484 api_server.go:131] duration metric: took 510.750118ms to wait for apiserver health ...
	I1227 10:06:08.449930  518484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:06:08.453803  518484 system_pods.go:59] 8 kube-system pods found
	I1227 10:06:08.453886  518484 system_pods.go:61] "coredns-7d764666f9-gsk6s" [5cd01233-f1ab-4fa5-b523-fcd838dbbdad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:06:08.453911  518484 system_pods.go:61] "etcd-default-k8s-diff-port-681744" [fcb8304d-4099-4c32-960d-a219ab755fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:06:08.453956  518484 system_pods.go:61] "kindnet-n6bcg" [fa32b3d6-de74-4996-8943-cd4072b7a4e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:06:08.453984  518484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-681744" [0ced8a70-d9e1-49bf-89a8-3c243fa652d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:06:08.454031  518484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-681744" [2006aa2c-c71c-4cae-b454-e688d30f225a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:06:08.454057  518484 system_pods.go:61] "kube-proxy-6wq7w" [cd457947-9b5f-43a6-9d83-24f1619c3977] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:06:08.454078  518484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-681744" [f2ad2e17-a1cf-4419-855e-eecaeedfd7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:06:08.454113  518484 system_pods.go:61] "storage-provisioner" [7d515dc0-eaac-424b-9308-be2c50a7d4fc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:06:08.454137  518484 system_pods.go:74] duration metric: took 4.200586ms to wait for pod list to return data ...
	I1227 10:06:08.454180  518484 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:06:08.457122  518484 default_sa.go:45] found service account: "default"
	I1227 10:06:08.457181  518484 default_sa.go:55] duration metric: took 2.961965ms for default service account to be created ...
	I1227 10:06:08.457205  518484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:06:08.460123  518484 system_pods.go:86] 8 kube-system pods found
	I1227 10:06:08.460195  518484 system_pods.go:89] "coredns-7d764666f9-gsk6s" [5cd01233-f1ab-4fa5-b523-fcd838dbbdad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:06:08.460222  518484 system_pods.go:89] "etcd-default-k8s-diff-port-681744" [fcb8304d-4099-4c32-960d-a219ab755fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:06:08.460270  518484 system_pods.go:89] "kindnet-n6bcg" [fa32b3d6-de74-4996-8943-cd4072b7a4e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:06:08.460301  518484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-681744" [0ced8a70-d9e1-49bf-89a8-3c243fa652d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:06:08.460343  518484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-681744" [2006aa2c-c71c-4cae-b454-e688d30f225a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:06:08.460371  518484 system_pods.go:89] "kube-proxy-6wq7w" [cd457947-9b5f-43a6-9d83-24f1619c3977] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:06:08.460396  518484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-681744" [f2ad2e17-a1cf-4419-855e-eecaeedfd7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:06:08.460437  518484 system_pods.go:89] "storage-provisioner" [7d515dc0-eaac-424b-9308-be2c50a7d4fc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:06:08.460462  518484 system_pods.go:126] duration metric: took 3.238638ms to wait for k8s-apps to be running ...
	I1227 10:06:08.460485  518484 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:06:08.460577  518484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:06:08.476335  518484 system_svc.go:56] duration metric: took 15.841059ms WaitForService to wait for kubelet
	I1227 10:06:08.476413  518484 kubeadm.go:587] duration metric: took 5.880026408s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:06:08.476448  518484 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:06:08.479399  518484 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:06:08.479479  518484 node_conditions.go:123] node cpu capacity is 2
	I1227 10:06:08.479507  518484 node_conditions.go:105] duration metric: took 3.020542ms to run NodePressure ...
	I1227 10:06:08.479533  518484 start.go:242] waiting for startup goroutines ...
	I1227 10:06:08.479567  518484 start.go:247] waiting for cluster config update ...
	I1227 10:06:08.479596  518484 start.go:256] writing updated cluster config ...
	I1227 10:06:08.479938  518484 ssh_runner.go:195] Run: rm -f paused
	I1227 10:06:08.487051  518484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:06:08.492100  518484 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gsk6s" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:06:10.497783  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:12.500164  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:11.485616  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:13.485704  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:14.999260  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:17.001832  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:15.986510  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:18.489177  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:19.513985  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:21.998332  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:20.985392  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	W1227 10:06:23.485026  515844 pod_ready.go:104] pod "coredns-7d764666f9-bdwpn" is not "Ready", error: <nil>
	I1227 10:06:25.485519  515844 pod_ready.go:94] pod "coredns-7d764666f9-bdwpn" is "Ready"
	I1227 10:06:25.485550  515844 pod_ready.go:86] duration metric: took 36.506319777s for pod "coredns-7d764666f9-bdwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.488371  515844 pod_ready.go:83] waiting for pod "etcd-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.493600  515844 pod_ready.go:94] pod "etcd-embed-certs-017122" is "Ready"
	I1227 10:06:25.493627  515844 pod_ready.go:86] duration metric: took 5.226445ms for pod "etcd-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.498914  515844 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.504246  515844 pod_ready.go:94] pod "kube-apiserver-embed-certs-017122" is "Ready"
	I1227 10:06:25.504277  515844 pod_ready.go:86] duration metric: took 5.330684ms for pod "kube-apiserver-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.508132  515844 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.683121  515844 pod_ready.go:94] pod "kube-controller-manager-embed-certs-017122" is "Ready"
	I1227 10:06:25.683151  515844 pod_ready.go:86] duration metric: took 174.98778ms for pod "kube-controller-manager-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:25.883310  515844 pod_ready.go:83] waiting for pod "kube-proxy-knmrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.283381  515844 pod_ready.go:94] pod "kube-proxy-knmrq" is "Ready"
	I1227 10:06:26.283413  515844 pod_ready.go:86] duration metric: took 400.074163ms for pod "kube-proxy-knmrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.484285  515844 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.883081  515844 pod_ready.go:94] pod "kube-scheduler-embed-certs-017122" is "Ready"
	I1227 10:06:26.883111  515844 pod_ready.go:86] duration metric: took 398.792424ms for pod "kube-scheduler-embed-certs-017122" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:06:26.883126  515844 pod_ready.go:40] duration metric: took 37.908320795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:06:26.944215  515844 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:06:26.947154  515844 out.go:203] 
	W1227 10:06:26.950475  515844 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:06:26.953467  515844 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:06:26.956360  515844 out.go:179] * Done! kubectl is now configured to use "embed-certs-017122" cluster and "default" namespace by default
	W1227 10:06:24.498087  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:26.498310  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:28.498454  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:30.997868  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:32.997927  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:35.498312  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:37.998128  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:40.005781  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	W1227 10:06:42.498792  518484 pod_ready.go:104] pod "coredns-7d764666f9-gsk6s" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.03122619Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.03479343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.034830871Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.034855224Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.038527655Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.038563438Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.038586955Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.042104053Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.042141042Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.042263357Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.045978333Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:29 embed-certs-017122 crio[654]: time="2025-12-27T10:06:29.046021533Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.310369286Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=57480d83-7b08-4437-a8c1-25cdd5b1234d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.311792828Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bab83d87-8e5d-43ca-b0b7-d9d9d5bd3e98 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.312736306Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper" id=47e734af-b569-4202-ab62-5205941fda1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.312832152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.319137223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.319651855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.338806646Z" level=info msg="Created container 9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper" id=47e734af-b569-4202-ab62-5205941fda1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.339804336Z" level=info msg="Starting container: 9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392" id=a947d877-70b8-4fd1-8b40-a5f80ceb5a07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.341680807Z" level=info msg="Started container" PID=1736 containerID=9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper id=a947d877-70b8-4fd1-8b40-a5f80ceb5a07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdd1e74d83ebdf713c8a4d9e688cbd687b52ddf461bc8fe53f65f0fbb6e20787
	Dec 27 10:06:32 embed-certs-017122 conmon[1734]: conmon 9bb2eed3a1a16c237734 <ninfo>: container 1736 exited with status 1
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.55074687Z" level=info msg="Removing container: a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94" id=853a0721-c06d-4133-ac50-98645a29f60e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.56244437Z" level=info msg="Error loading conmon cgroup of container a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94: cgroup deleted" id=853a0721-c06d-4133-ac50-98645a29f60e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:06:32 embed-certs-017122 crio[654]: time="2025-12-27T10:06:32.56705415Z" level=info msg="Removed container a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r/dashboard-metrics-scraper" id=853a0721-c06d-4133-ac50-98645a29f60e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9bb2eed3a1a16       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   3                   bdd1e74d83ebd       dashboard-metrics-scraper-867fb5f87b-tkk2r   kubernetes-dashboard
	c42ed83b7c4b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   3294cb6595897       storage-provisioner                          kube-system
	185eb58aa58d9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   a6cc41a704ad5       kubernetes-dashboard-b84665fb8-zzkkj         kubernetes-dashboard
	4c9ab6fe53512       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   f352b63e7f341       coredns-7d764666f9-bdwpn                     kube-system
	2a07a957c39cf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   62c36ee471702       busybox                                      default
	d20768561f33e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   3294cb6595897       storage-provisioner                          kube-system
	2fd5b0ea30513       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago       Running             kube-proxy                  1                   9714334b98c84       kube-proxy-knmrq                             kube-system
	db5a895de4aa9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   6342ce0a345e4       kindnet-7ts9b                                kube-system
	0cc3bd645f392       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   fa657458e88fc       kube-scheduler-embed-certs-017122            kube-system
	76f3b93c8f471       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   79e5aeb30e0de       kube-controller-manager-embed-certs-017122   kube-system
	6cfa79ecfd13f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   519be58815797       kube-apiserver-embed-certs-017122            kube-system
	44cdcc6347ae5       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   45e83682d1e47       etcd-embed-certs-017122                      kube-system
	
	
	==> coredns [4c9ab6fe53512b9ed7548d8e5e3e67c177bc266d4293df4e51df87ff6e091014] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57344 - 64438 "HINFO IN 7110569481248417269.1535362450882622996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011535578s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-017122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-017122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=embed-certs-017122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:04:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-017122
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:06:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:04:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:06:18 +0000   Sat, 27 Dec 2025 10:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-017122
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                65221525-5166-4f0b-9b53-9db790e49fde
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-bdwpn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-embed-certs-017122                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-7ts9b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-017122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-embed-certs-017122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-knmrq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-017122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-tkk2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zzkkj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node embed-certs-017122 event: Registered Node embed-certs-017122 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node embed-certs-017122 event: Registered Node embed-certs-017122 in Controller
	
	
	==> dmesg <==
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44cdcc6347ae5077ffdabfa2362bee311b3b59c6c54028ea82f59bab340bbb83] <==
	{"level":"info","ts":"2025-12-27T10:05:44.382751Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:05:44.382915Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:05:44.384387Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:05:44.398472Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:05:44.398508Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:05:44.399255Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:05:44.399292Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:05:44.767705Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:44.767812Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:44.767888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:05:44.767930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:05:44.767976Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.770193Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.770264Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:05:44.770308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.770345Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:05:44.774454Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-017122 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:05:44.774625Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:05:44.775542Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:05:44.791079Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:05:44.792046Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:05:44.795208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:05:44.795638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:05:44.796213Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:05:44.830666Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:06:44 up  2:49,  0 user,  load average: 3.46, 2.74, 2.33
	Linux embed-certs-017122 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [db5a895de4aa9bea7fd27e010a93c2e73b2cd31487927aa4bc444480a74acabc] <==
	I1227 10:05:48.822311       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:05:48.822725       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:05:48.822912       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:05:48.822959       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:05:48.823012       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:05:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:05:49.021621       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:05:49.021763       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:05:49.021801       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:05:49.023139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:06:19.024603       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:06:19.024606       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:06:19.024759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:06:19.024825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:06:20.222806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:06:20.222839       1 metrics.go:72] Registering metrics
	I1227 10:06:20.222910       1 controller.go:711] "Syncing nftables rules"
	I1227 10:06:29.021979       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:06:29.022673       1 main.go:301] handling current node
	I1227 10:06:39.026826       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:06:39.026860       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cfa79ecfd13f3a2204b0eca76862e1ae58e5961230bbbb0e2c311e1886de756] <==
	I1227 10:05:47.403925       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:05:47.416093       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.416123       1 policy_source.go:248] refreshing policies
	I1227 10:05:47.416352       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.416431       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:05:47.416444       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:05:47.416583       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:05:47.418835       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.418883       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:47.418904       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:05:47.419115       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:05:47.419318       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:05:47.435666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 10:05:47.442334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:05:48.004505       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:05:48.077663       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:05:48.135046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:05:48.183655       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:05:48.200815       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:05:48.217646       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:05:48.389610       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.196.96"}
	I1227 10:05:48.422481       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.163.170"}
	I1227 10:05:50.862115       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:05:50.911551       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:05:50.963216       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [76f3b93c8f471b74c03f3058edede420056a0cf37682f580aa788c86b60dd759] <==
	I1227 10:05:50.483046       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.483146       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:05:50.483184       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:50.483210       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.483279       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.485449       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.486372       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.486787       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-017122"
	I1227 10:05:50.488469       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:05:50.488059       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488070       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488077       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488083       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488091       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488099       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488105       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488110       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.488116       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.486965       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.512064       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:50.518961       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.585117       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:50.585238       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:05:50.585276       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:05:50.612882       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [2fd5b0ea3051372087c322ed48f365491b0c576c41d49c831586bb295e8cd4b1] <==
	I1227 10:05:48.814432       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:05:48.899724       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:49.000476       1 shared_informer.go:377] "Caches are synced"
	I1227 10:05:49.000615       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:05:49.000745       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:05:49.025223       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:05:49.025289       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:05:49.030744       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:05:49.031098       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:05:49.031133       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:05:49.032176       1 config.go:200] "Starting service config controller"
	I1227 10:05:49.032198       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:05:49.032507       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:05:49.032522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:05:49.032542       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:05:49.032546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:05:49.033199       1 config.go:309] "Starting node config controller"
	I1227 10:05:49.033217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:05:49.033224       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:05:49.133262       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:05:49.133283       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:05:49.133301       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0cc3bd645f392c02eb74608b63e52a0c2ca4f3ab5d2fa6e9de3815e6b3f84037] <==
	I1227 10:05:45.285744       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:05:47.250582       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:05:47.250617       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:05:47.250627       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:05:47.250634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:05:47.369265       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:05:47.369299       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:05:47.371704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:05:47.371835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:05:47.371846       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:05:47.371872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:05:47.474431       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:06:00 embed-certs-017122 kubelet[781]: E1227 10:06:00.430878     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:04 embed-certs-017122 kubelet[781]: E1227 10:06:04.455076     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zzkkj" containerName="kubernetes-dashboard"
	Dec 27 10:06:05 embed-certs-017122 kubelet[781]: E1227 10:06:05.457506     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zzkkj" containerName="kubernetes-dashboard"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: E1227 10:06:10.309501     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.309554     781 scope.go:122] "RemoveContainer" containerID="c16f9f833240b7ca49b7c9bae5e01d879dbec8f3ec59b3a638d825cb21992277"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.471998     781 scope.go:122] "RemoveContainer" containerID="c16f9f833240b7ca49b7c9bae5e01d879dbec8f3ec59b3a638d825cb21992277"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: E1227 10:06:10.472335     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.472363     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: E1227 10:06:10.472519     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:10 embed-certs-017122 kubelet[781]: I1227 10:06:10.496530     781 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zzkkj" podStartSLOduration=7.025446374 podStartE2EDuration="19.496509795s" podCreationTimestamp="2025-12-27 10:05:51 +0000 UTC" firstStartedPulling="2025-12-27 10:05:51.531659875 +0000 UTC m=+8.418332403" lastFinishedPulling="2025-12-27 10:06:04.002723296 +0000 UTC m=+20.889395824" observedRunningTime="2025-12-27 10:06:04.479480376 +0000 UTC m=+21.366152920" watchObservedRunningTime="2025-12-27 10:06:10.496509795 +0000 UTC m=+27.383182323"
	Dec 27 10:06:19 embed-certs-017122 kubelet[781]: I1227 10:06:19.503765     781 scope.go:122] "RemoveContainer" containerID="d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275"
	Dec 27 10:06:20 embed-certs-017122 kubelet[781]: E1227 10:06:20.093704     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:20 embed-certs-017122 kubelet[781]: I1227 10:06:20.093751     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:20 embed-certs-017122 kubelet[781]: E1227 10:06:20.093921     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:25 embed-certs-017122 kubelet[781]: E1227 10:06:25.318404     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-bdwpn" containerName="coredns"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: E1227 10:06:32.309786     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: I1227 10:06:32.309847     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: I1227 10:06:32.547731     781 scope.go:122] "RemoveContainer" containerID="a435ea347be6e685a8fb8efdde0c13f38f60709d949072677d1c021195c31c94"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: E1227 10:06:32.547973     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: I1227 10:06:32.548275     781 scope.go:122] "RemoveContainer" containerID="9bb2eed3a1a16c237734c407b363356f266324a7f0f4221b794d2a8ea991e392"
	Dec 27 10:06:32 embed-certs-017122 kubelet[781]: E1227 10:06:32.548494     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tkk2r_kubernetes-dashboard(c43e6502-6e83-4533-bf67-bf3d100bb4c6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tkk2r" podUID="c43e6502-6e83-4533-bf67-bf3d100bb4c6"
	Dec 27 10:06:39 embed-certs-017122 kubelet[781]: I1227 10:06:39.287105     781 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 10:06:39 embed-certs-017122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:06:39 embed-certs-017122 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:06:39 embed-certs-017122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [185eb58aa58d9d750b982bfdc9c22d6399ce253489b24c332510411e62876981] <==
	2025/12/27 10:06:04 Starting overwatch
	2025/12/27 10:06:04 Using namespace: kubernetes-dashboard
	2025/12/27 10:06:04 Using in-cluster config to connect to apiserver
	2025/12/27 10:06:04 Using secret token for csrf signing
	2025/12/27 10:06:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:06:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:06:04 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:06:04 Generating JWE encryption key
	2025/12/27 10:06:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:06:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:06:05 Initializing JWE encryption key from synchronized object
	2025/12/27 10:06:05 Creating in-cluster Sidecar client
	2025/12/27 10:06:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:05 Serving insecurely on HTTP port: 9090
	2025/12/27 10:06:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c42ed83b7c4b6ba64ecae6ede25519b2ab7b1b1805784e124265c4431dd093ac] <==
	I1227 10:06:19.597695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:06:19.619299       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:06:19.619857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:06:19.623361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:23.078470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:27.339318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:30.937381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:33.990711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:37.013465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:37.019180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:37.019638       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:06:37.019930       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-017122_25db5ca2-1039-49ca-b74c-77cbd74e1175!
	I1227 10:06:37.020652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d9ebc47-35a9-4be4-b5b3-d21c89072018", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-017122_25db5ca2-1039-49ca-b74c-77cbd74e1175 became leader
	W1227 10:06:37.029130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:37.033581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:37.120976       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-017122_25db5ca2-1039-49ca-b74c-77cbd74e1175!
	W1227 10:06:39.040648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:39.052556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:41.056122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:41.064288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:43.067065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:43.074632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d20768561f33e54160822873cd11a005f1ff46dbc38b0abae2e8ecd8d9636275] <==
	I1227 10:05:48.738135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:06:18.744527       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
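Two symptoms in the logs above are worth separating: the kubernetes-dashboard container repeatedly fails its metric-client health check against the dashboard-metrics-scraper service, and the earlier storage-provisioner instance times out reaching the in-cluster API server address https://10.96.0.1:443 before its replacement acquires the leader lease. A minimal manual check for both, sketched outside the test harness and assuming the embed-certs-017122 profile and kubeconfig context still exist (they are deleted later in this run), might look like:

	# Does the metrics-scraper Service exist and have endpoints in the dashboard namespace?
	kubectl --context embed-certs-017122 -n kubernetes-dashboard get svc,endpointslices

	# Is the API server reachable at all from the host-side kubeconfig?
	kubectl --context embed-certs-017122 get --raw /version

	# And through the cluster service IP seen in the provisioner log, from inside the node
	# (assumes curl is available in the kicbase node image):
	out/minikube-linux-arm64 -p embed-certs-017122 ssh -- curl -sk https://10.96.0.1:443/version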
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-017122 -n embed-certs-017122
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-017122 -n embed-certs-017122: exit status 2 (492.733245ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-017122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-681744 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-681744 --alsologtostderr -v=1: exit status 80 (2.295492568s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-681744 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:06:59.432027  523731 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:06:59.432270  523731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:59.432298  523731 out.go:374] Setting ErrFile to fd 2...
	I1227 10:06:59.432318  523731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:59.432611  523731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:06:59.432934  523731 out.go:368] Setting JSON to false
	I1227 10:06:59.432987  523731 mustload.go:66] Loading cluster: default-k8s-diff-port-681744
	I1227 10:06:59.435288  523731 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:59.435896  523731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-681744 --format={{.State.Status}}
	I1227 10:06:59.473154  523731 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:59.473498  523731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:59.545505  523731 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-27 10:06:59.53555321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:59.546215  523731 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-681744 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:06:59.553043  523731 out.go:179] * Pausing node default-k8s-diff-port-681744 ... 
	I1227 10:06:59.556812  523731 host.go:66] Checking if "default-k8s-diff-port-681744" exists ...
	I1227 10:06:59.557138  523731 ssh_runner.go:195] Run: systemctl --version
	I1227 10:06:59.557190  523731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-681744
	I1227 10:06:59.576416  523731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/default-k8s-diff-port-681744/id_rsa Username:docker}
	I1227 10:06:59.691736  523731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:06:59.713233  523731 pause.go:52] kubelet running: true
	I1227 10:06:59.713318  523731 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:00.090356  523731 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:00.090456  523731 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:00.342260  523731 cri.go:96] found id: "c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311"
	I1227 10:07:00.342298  523731 cri.go:96] found id: "1ebbbaa41e609904387bf6f6ddcce7e4ba4736940bdbc05e10eb8944ddb23cab"
	I1227 10:07:00.342311  523731 cri.go:96] found id: "d9b232bb33745d9367c3276c09c36ce009adeee44f23352439ae08e719cd1485"
	I1227 10:07:00.342316  523731 cri.go:96] found id: "f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e"
	I1227 10:07:00.342322  523731 cri.go:96] found id: "1f4229e7da039fc2a87cf4691415876ed662e1bc499beefa038042f87efd93b9"
	I1227 10:07:00.342326  523731 cri.go:96] found id: "05ed911c9437337bd74f43e8478b89cc420bd0d57d7c4b74775f9f242d146fd0"
	I1227 10:07:00.342330  523731 cri.go:96] found id: "33fdbb0d08777749f0775d9538c2ddf0c2e1275e2fe8d32dc7d2e64e6ca81b94"
	I1227 10:07:00.342335  523731 cri.go:96] found id: "5c6646254efce08f32743176f38a716d497ca0e0aaa6740710647bf39a812092"
	I1227 10:07:00.342338  523731 cri.go:96] found id: "32a79604be9925f6e05bfd7503e0687ee2bac5349290a54929faec55c1325915"
	I1227 10:07:00.342395  523731 cri.go:96] found id: "a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf"
	I1227 10:07:00.342407  523731 cri.go:96] found id: "c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	I1227 10:07:00.342411  523731 cri.go:96] found id: "f8410dd3636ef8c5aa27aff620f03619584e1cd859e8390d1cfc0169a194e203"
	I1227 10:07:00.342414  523731 cri.go:96] found id: ""
	I1227 10:07:00.342495  523731 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:00.368183  523731 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:00Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:00.519506  523731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:00.544938  523731 pause.go:52] kubelet running: false
	I1227 10:07:00.545005  523731 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:00.783318  523731 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:00.783410  523731 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:00.888949  523731 cri.go:96] found id: "c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311"
	I1227 10:07:00.889028  523731 cri.go:96] found id: "1ebbbaa41e609904387bf6f6ddcce7e4ba4736940bdbc05e10eb8944ddb23cab"
	I1227 10:07:00.889058  523731 cri.go:96] found id: "d9b232bb33745d9367c3276c09c36ce009adeee44f23352439ae08e719cd1485"
	I1227 10:07:00.889082  523731 cri.go:96] found id: "f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e"
	I1227 10:07:00.889104  523731 cri.go:96] found id: "1f4229e7da039fc2a87cf4691415876ed662e1bc499beefa038042f87efd93b9"
	I1227 10:07:00.889127  523731 cri.go:96] found id: "05ed911c9437337bd74f43e8478b89cc420bd0d57d7c4b74775f9f242d146fd0"
	I1227 10:07:00.889150  523731 cri.go:96] found id: "33fdbb0d08777749f0775d9538c2ddf0c2e1275e2fe8d32dc7d2e64e6ca81b94"
	I1227 10:07:00.889171  523731 cri.go:96] found id: "5c6646254efce08f32743176f38a716d497ca0e0aaa6740710647bf39a812092"
	I1227 10:07:00.889191  523731 cri.go:96] found id: "32a79604be9925f6e05bfd7503e0687ee2bac5349290a54929faec55c1325915"
	I1227 10:07:00.889214  523731 cri.go:96] found id: "a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf"
	I1227 10:07:00.889235  523731 cri.go:96] found id: "c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	I1227 10:07:00.889256  523731 cri.go:96] found id: "f8410dd3636ef8c5aa27aff620f03619584e1cd859e8390d1cfc0169a194e203"
	I1227 10:07:00.889274  523731 cri.go:96] found id: ""
	I1227 10:07:00.889343  523731 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:01.269762  523731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:01.285150  523731 pause.go:52] kubelet running: false
	I1227 10:07:01.285214  523731 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:01.494375  523731 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:01.495291  523731 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:01.627877  523731 cri.go:96] found id: "c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311"
	I1227 10:07:01.627899  523731 cri.go:96] found id: "1ebbbaa41e609904387bf6f6ddcce7e4ba4736940bdbc05e10eb8944ddb23cab"
	I1227 10:07:01.627904  523731 cri.go:96] found id: "d9b232bb33745d9367c3276c09c36ce009adeee44f23352439ae08e719cd1485"
	I1227 10:07:01.627908  523731 cri.go:96] found id: "f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e"
	I1227 10:07:01.627911  523731 cri.go:96] found id: "1f4229e7da039fc2a87cf4691415876ed662e1bc499beefa038042f87efd93b9"
	I1227 10:07:01.627915  523731 cri.go:96] found id: "05ed911c9437337bd74f43e8478b89cc420bd0d57d7c4b74775f9f242d146fd0"
	I1227 10:07:01.627918  523731 cri.go:96] found id: "33fdbb0d08777749f0775d9538c2ddf0c2e1275e2fe8d32dc7d2e64e6ca81b94"
	I1227 10:07:01.627921  523731 cri.go:96] found id: "5c6646254efce08f32743176f38a716d497ca0e0aaa6740710647bf39a812092"
	I1227 10:07:01.627925  523731 cri.go:96] found id: "32a79604be9925f6e05bfd7503e0687ee2bac5349290a54929faec55c1325915"
	I1227 10:07:01.627947  523731 cri.go:96] found id: "a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf"
	I1227 10:07:01.627951  523731 cri.go:96] found id: "c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	I1227 10:07:01.627957  523731 cri.go:96] found id: "f8410dd3636ef8c5aa27aff620f03619584e1cd859e8390d1cfc0169a194e203"
	I1227 10:07:01.627961  523731 cri.go:96] found id: ""
	I1227 10:07:01.628012  523731 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:01.644392  523731 out.go:203] 
	W1227 10:07:01.647365  523731 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:07:01.647386  523731 out.go:285] * 
	* 
	W1227 10:07:01.651574  523731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:07:01.654970  523731 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-681744 --alsologtostderr -v=1 failed: exit status 80
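The stderr above shows the mechanism of the failure: after disabling the kubelet, minikube's pause path lists containers via crictl and then runs `sudo runc list -f json`, which exits with status 1 because /run/runc (runc's default state directory when run as root) does not exist on the node; the call is retried and eventually surfaces as GUEST_PAUSE. A few manual diagnostics, sketched here outside the test harness and assuming the profile is still running, would be:

	# Reproduce the exact call from the stderr above
	out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- sudo runc list -f json

	# Confirm whether runc's default state directory exists on the node
	out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- sudo ls -ld /run/runc

	# CRI-O itself still enumerates the containers it manages
	out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- sudo crictl ps -a

If CRI-O were configured with a non-default runtime_root for runc, the listing would need `runc --root <dir> list` instead; that configuration is not visible in these logs.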
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-681744
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-681744:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89",
	        "Created": "2025-12-27T10:04:44.730801241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518614,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:05:53.355574602Z",
	            "FinishedAt": "2025-12-27T10:05:52.246136882Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/hosts",
	        "LogPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89-json.log",
	        "Name": "/default-k8s-diff-port-681744",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-681744:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-681744",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89",
	                "LowerDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-681744",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-681744/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-681744",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-681744",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-681744",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c075b4da0ee6b2b2fff3aa99b8375f2b763ab8e19555ef79d6e1a600a730d93",
	            "SandboxKey": "/var/run/docker/netns/6c075b4da0ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-681744": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:8e:4a:75:ec:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a1f92b122a97b2834afb7ef2e15881b65b61b90adec9a9012e2ffcfe6970dabd",
	                    "EndpointID": "90d57dfead9e59ce27cea5186aaedc5e0df64513298b8667705e2024db1503d2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-681744",
	                        "d2370e32a3db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
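The NetworkSettings.Ports block above is where minikube resolves the host-side SSH port for the node (33456 mapped from 22/tcp), using the same Go template that appears in the pause stderr earlier. As a standalone check against this container, that lookup is simply:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-681744
	# Expected output, matching the dump above: 33456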
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744: exit status 2 (501.949261ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-681744 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-681744 logs -n 25: (1.65151491s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                                                                                                  │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                                                                                                    │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p disable-driver-mounts-242374                                                                                                                                                                                                               │ disable-driver-mounts-242374 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p embed-certs-017122 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                                                                                                   │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340            │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ image   │ default-k8s-diff-port-681744 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p default-k8s-diff-port-681744 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:06:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:06:48.765787  522415 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:06:48.766009  522415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:48.766039  522415 out.go:374] Setting ErrFile to fd 2...
	I1227 10:06:48.766059  522415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:48.766500  522415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:06:48.767074  522415 out.go:368] Setting JSON to false
	I1227 10:06:48.768050  522415 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10158,"bootTime":1766819851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:06:48.768169  522415 start.go:143] virtualization:  
	I1227 10:06:48.772170  522415 out.go:179] * [newest-cni-133340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:06:48.776483  522415 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:06:48.776548  522415 notify.go:221] Checking for updates...
	I1227 10:06:48.782742  522415 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:06:48.786061  522415 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:06:48.789052  522415 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:06:48.792046  522415 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:06:48.795081  522415 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:06:48.798681  522415 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:48.798801  522415 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:06:48.820711  522415 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:06:48.820827  522415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:48.879245  522415 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:06:48.869406055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:48.879355  522415 docker.go:319] overlay module found
	I1227 10:06:48.882590  522415 out.go:179] * Using the docker driver based on user configuration
	I1227 10:06:48.885552  522415 start.go:309] selected driver: docker
	I1227 10:06:48.885571  522415 start.go:928] validating driver "docker" against <nil>
	I1227 10:06:48.885586  522415 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:06:48.886459  522415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:48.942136  522415 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:06:48.932912635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:48.942345  522415 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 10:06:48.942377  522415 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 10:06:48.942608  522415 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:06:48.945423  522415 out.go:179] * Using Docker driver with root privileges
	I1227 10:06:48.948313  522415 cni.go:84] Creating CNI manager for ""
	I1227 10:06:48.948387  522415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:06:48.948402  522415 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:06:48.948484  522415 start.go:353] cluster config:
	{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:06:48.953594  522415 out.go:179] * Starting "newest-cni-133340" primary control-plane node in "newest-cni-133340" cluster
	I1227 10:06:48.956505  522415 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:06:48.960136  522415 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:06:48.963018  522415 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:06:48.963072  522415 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:06:48.963081  522415 cache.go:65] Caching tarball of preloaded images
	I1227 10:06:48.963168  522415 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:06:48.963184  522415 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:06:48.963311  522415 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:06:48.963336  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json: {Name:mka98e5e41c61eb971db956a5c71d82577d33d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:48.963494  522415 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:06:48.983663  522415 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:06:48.983683  522415 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:06:48.983697  522415 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:06:48.983728  522415 start.go:360] acquireMachinesLock for newest-cni-133340: {Name:mke43a3ebd8f4eaf65da86bf9dafee410f8229a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:06:48.983833  522415 start.go:364] duration metric: took 86.688µs to acquireMachinesLock for "newest-cni-133340"
	I1227 10:06:48.983863  522415 start.go:93] Provisioning new machine with config: &{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:06:48.983932  522415 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:06:48.987230  522415 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:06:48.987455  522415 start.go:159] libmachine.API.Create for "newest-cni-133340" (driver="docker")
	I1227 10:06:48.987487  522415 client.go:173] LocalClient.Create starting
	I1227 10:06:48.987550  522415 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:06:48.987595  522415 main.go:144] libmachine: Decoding PEM data...
	I1227 10:06:48.987614  522415 main.go:144] libmachine: Parsing certificate...
	I1227 10:06:48.987675  522415 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:06:48.987694  522415 main.go:144] libmachine: Decoding PEM data...
	I1227 10:06:48.987713  522415 main.go:144] libmachine: Parsing certificate...
	I1227 10:06:48.988072  522415 cli_runner.go:164] Run: docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:06:49.004592  522415 cli_runner.go:211] docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:06:49.004699  522415 network_create.go:284] running [docker network inspect newest-cni-133340] to gather additional debugging logs...
	I1227 10:06:49.004724  522415 cli_runner.go:164] Run: docker network inspect newest-cni-133340
	W1227 10:06:49.021988  522415 cli_runner.go:211] docker network inspect newest-cni-133340 returned with exit code 1
	I1227 10:06:49.022058  522415 network_create.go:287] error running [docker network inspect newest-cni-133340]: docker network inspect newest-cni-133340: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-133340 not found
	I1227 10:06:49.022075  522415 network_create.go:289] output of [docker network inspect newest-cni-133340]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-133340 not found
	
	** /stderr **
	I1227 10:06:49.022237  522415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:06:49.039674  522415 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:06:49.040150  522415 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:06:49.040440  522415 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:06:49.040874  522415 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a50530}
	I1227 10:06:49.040898  522415 network_create.go:124] attempt to create docker network newest-cni-133340 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:06:49.040964  522415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-133340 newest-cni-133340
	I1227 10:06:49.102657  522415 network_create.go:108] docker network newest-cni-133340 192.168.76.0/24 created
	I1227 10:06:49.102689  522415 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-133340" container
	I1227 10:06:49.102778  522415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:06:49.118551  522415 cli_runner.go:164] Run: docker volume create newest-cni-133340 --label name.minikube.sigs.k8s.io=newest-cni-133340 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:06:49.136165  522415 oci.go:103] Successfully created a docker volume newest-cni-133340
	I1227 10:06:49.136261  522415 cli_runner.go:164] Run: docker run --rm --name newest-cni-133340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-133340 --entrypoint /usr/bin/test -v newest-cni-133340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:06:49.658339  522415 oci.go:107] Successfully prepared a docker volume newest-cni-133340
	I1227 10:06:49.658413  522415 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:06:49.658428  522415 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:06:49.658521  522415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-133340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:06:53.863051  522415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-133340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.204459279s)
	I1227 10:06:53.863084  522415 kic.go:203] duration metric: took 4.204652691s to extract preloaded images to volume ...
	W1227 10:06:53.863220  522415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:06:53.863329  522415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:06:53.923891  522415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-133340 --name newest-cni-133340 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-133340 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-133340 --network newest-cni-133340 --ip 192.168.76.2 --volume newest-cni-133340:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:06:54.225384  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Running}}
	I1227 10:06:54.245637  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:06:54.264758  522415 cli_runner.go:164] Run: docker exec newest-cni-133340 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:06:54.317513  522415 oci.go:144] the created container "newest-cni-133340" has a running status.
	I1227 10:06:54.317542  522415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa...
	I1227 10:06:54.840922  522415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:06:54.861377  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:06:54.878717  522415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:06:54.878740  522415 kic_runner.go:114] Args: [docker exec --privileged newest-cni-133340 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:06:54.918703  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:06:54.939313  522415 machine.go:94] provisionDockerMachine start ...
	I1227 10:06:54.939415  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:54.956396  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:54.956736  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:54.956751  522415 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:06:54.957392  522415 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:06:58.106087  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:06:58.106110  522415 ubuntu.go:182] provisioning hostname "newest-cni-133340"
	I1227 10:06:58.106202  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.130554  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:58.130868  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:58.130883  522415 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-133340 && echo "newest-cni-133340" | sudo tee /etc/hostname
	I1227 10:06:58.279060  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:06:58.279136  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.296128  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:58.296449  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:58.296465  522415 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133340/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:06:58.438713  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:06:58.438739  522415 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:06:58.438757  522415 ubuntu.go:190] setting up certificates
	I1227 10:06:58.438767  522415 provision.go:84] configureAuth start
	I1227 10:06:58.438826  522415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:06:58.462462  522415 provision.go:143] copyHostCerts
	I1227 10:06:58.462529  522415 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:06:58.462543  522415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:06:58.462647  522415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:06:58.462780  522415 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:06:58.462795  522415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:06:58.462838  522415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:06:58.462902  522415 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:06:58.462912  522415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:06:58.462936  522415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:06:58.462988  522415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133340 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-133340]
	I1227 10:06:58.592685  522415 provision.go:177] copyRemoteCerts
	I1227 10:06:58.592758  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:06:58.592808  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.609587  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:58.709878  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:06:58.728093  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:06:58.746128  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:06:58.765604  522415 provision.go:87] duration metric: took 326.823551ms to configureAuth
	I1227 10:06:58.765634  522415 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:06:58.765831  522415 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:58.765956  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.783926  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:58.784227  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:58.784241  522415 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:06:59.205882  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:06:59.205908  522415 machine.go:97] duration metric: took 4.26657174s to provisionDockerMachine
	I1227 10:06:59.205919  522415 client.go:176] duration metric: took 10.218424112s to LocalClient.Create
	I1227 10:06:59.205932  522415 start.go:167] duration metric: took 10.218479186s to libmachine.API.Create "newest-cni-133340"
	I1227 10:06:59.205940  522415 start.go:293] postStartSetup for "newest-cni-133340" (driver="docker")
	I1227 10:06:59.205950  522415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:06:59.206039  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:06:59.206088  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.229590  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.332538  522415 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:06:59.336756  522415 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:06:59.336782  522415 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:06:59.336793  522415 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:06:59.336849  522415 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:06:59.336928  522415 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:06:59.337038  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:06:59.346904  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:06:59.383988  522415 start.go:296] duration metric: took 178.03229ms for postStartSetup
	I1227 10:06:59.384380  522415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:06:59.419398  522415 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:06:59.419680  522415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:06:59.419733  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.442610  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.555501  522415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:06:59.562777  522415 start.go:128] duration metric: took 10.578830736s to createHost
	I1227 10:06:59.562811  522415 start.go:83] releasing machines lock for "newest-cni-133340", held for 10.578964342s
	I1227 10:06:59.562885  522415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:06:59.591042  522415 ssh_runner.go:195] Run: cat /version.json
	I1227 10:06:59.591103  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.591370  522415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:06:59.591428  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.627727  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.638092  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.725896  522415 ssh_runner.go:195] Run: systemctl --version
	I1227 10:06:59.847506  522415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:06:59.905052  522415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:06:59.912750  522415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:06:59.912933  522415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:06:59.957419  522415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:06:59.957510  522415 start.go:496] detecting cgroup driver to use...
	I1227 10:06:59.957572  522415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:06:59.957657  522415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:06:59.981596  522415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:06:59.998949  522415 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:06:59.999073  522415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:00.056378  522415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:00.117601  522415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:00.420084  522415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:00.565968  522415 docker.go:234] disabling docker service ...
	I1227 10:07:00.566093  522415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:00.598748  522415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:00.634717  522415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:00.773124  522415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:00.922635  522415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:00.936449  522415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:00.953118  522415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:00.953240  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.962874  522415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:00.963019  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.972293  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.980889  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.989630  522415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:00.998403  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:01.008732  522415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:01.023183  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:01.032322  522415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:01.039975  522415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:01.047656  522415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:01.159721  522415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:07:01.319325  522415 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:01.319415  522415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:01.324479  522415 start.go:574] Will wait 60s for crictl version
	I1227 10:07:01.324622  522415 ssh_runner.go:195] Run: which crictl
	I1227 10:07:01.328380  522415 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:01.363590  522415 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:01.363807  522415 ssh_runner.go:195] Run: crio --version
	I1227 10:07:01.409718  522415 ssh_runner.go:195] Run: crio --version
	I1227 10:07:01.450980  522415 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:01.453978  522415 cli_runner.go:164] Run: docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:01.492010  522415 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:01.499136  522415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:01.514617  522415 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
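For anyone trying to reproduce a cluster with the same shape as the profile config dumped at the start of this run, the following is a minimal sketch. The exact flags the test harness passed are not shown in this log, so the flag set below is an assumption reconstructed from the config fields (docker driver, crio runtime, 3072MB and 2 CPUs, Kubernetes v1.35.0, and the kubeadm pod-network-cidr extra option):

  # Sketch only: flags inferred from the logged profile config, not copied from the test source.
  out/minikube-linux-arm64 start -p newest-cni-133340 \
    --driver=docker \
    --container-runtime=crio \
    --memory=3072 --cpus=2 \
    --kubernetes-version=v1.35.0 \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16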
	
	
	==> CRI-O <==
	Dec 27 10:06:39 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:39.02354681Z" level=info msg="Created container c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311: kube-system/storage-provisioner/storage-provisioner" id=27e1680f-d087-4561-bbe4-b625fa1911c7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:39 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:39.024965781Z" level=info msg="Starting container: c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311" id=684b206b-d046-4a1c-8a4c-f1046fee1e7e name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:06:39 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:39.029103063Z" level=info msg="Started container" PID=1658 containerID=c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311 description=kube-system/storage-provisioner/storage-provisioner id=684b206b-d046-4a1c-8a4c-f1046fee1e7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35ea032155ff49f7f1ae6f6fc5c6ff590bc1dfb59d6d1216b8eb26b998db64e
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.622671247Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.628261077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.62845248Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.628547366Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.633866447Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.634044837Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.634129343Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.637954556Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.637987393Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.638018901Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.641253214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.64128871Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.766249722Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c7bc6579-cd84-4bcc-8f7b-0911224ffac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.76788568Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9cb42d5-c05b-4940-9636-f7cbf138d8aa name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.770286416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq/dashboard-metrics-scraper" id=e8fed25d-f3a5-4b35-9e7f-3cc01d5ada1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.770403357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.789485531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.790240535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.825278158Z" level=info msg="Created container a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq/dashboard-metrics-scraper" id=e8fed25d-f3a5-4b35-9e7f-3cc01d5ada1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.82740808Z" level=info msg="Starting container: a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf" id=6144c3ae-8886-43b6-861e-c13464bad9d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.831781238Z" level=info msg="Started container" PID=1773 containerID=a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq/dashboard-metrics-scraper id=6144c3ae-8886-43b6-861e-c13464bad9d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f0fd8832e29d9c4e362c2100efea92b6cc264ed4acd01cc046c142c31623735
	Dec 27 10:06:59 default-k8s-diff-port-681744 conmon[1771]: conmon a22c4cfddf705c6c7d8c <ninfo>: container 1773 exited with status 1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a22c4cfddf705       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago        Exited              dashboard-metrics-scraper   3                   9f0fd8832e29d       dashboard-metrics-scraper-867fb5f87b-qx5tq             kubernetes-dashboard
	c5e4c2046964b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   e35ea032155ff       storage-provisioner                                    kube-system
	c6e9ba892a5f9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   9f0fd8832e29d       dashboard-metrics-scraper-867fb5f87b-qx5tq             kubernetes-dashboard
	f8410dd3636ef       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   1b6a548e7d145       kubernetes-dashboard-b84665fb8-rmdxj                   kubernetes-dashboard
	1590af91f4b09       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   21dcfb7bfab8d       busybox                                                default
	1ebbbaa41e609       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           54 seconds ago       Running             coredns                     1                   3ce26d2397773       coredns-7d764666f9-gsk6s                               kube-system
	d9b232bb33745       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           54 seconds ago       Running             kindnet-cni                 1                   f82566a408a23       kindnet-n6bcg                                          kube-system
	f21c10c677052       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   e35ea032155ff       storage-provisioner                                    kube-system
	1f4229e7da039       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           54 seconds ago       Running             kube-proxy                  1                   cdbab87ed14cf       kube-proxy-6wq7w                                       kube-system
	05ed911c94373       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   79a06ae9bd326       kube-apiserver-default-k8s-diff-port-681744            kube-system
	33fdbb0d08777       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   1cf13a42fb3d8       kube-controller-manager-default-k8s-diff-port-681744   kube-system
	5c6646254efce       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   0462d45b1cdd3       kube-scheduler-default-k8s-diff-port-681744            kube-system
	32a79604be992       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   405bcc6dd0b21       etcd-default-k8s-diff-port-681744                      kube-system
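The table above is a point-in-time CRI container listing from the node. A hedged sketch of how to pull the same view while the profile is still up, assuming the node is reachable through minikube ssh:

  # Sketch: list all CRI containers on the node, including exited ones.
  out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- sudo crictl ps -a
  # Inspect why the exited dashboard-metrics-scraper container (ID prefix from the table above) stopped.
  out/minikube-linux-arm64 -p default-k8s-diff-port-681744 ssh -- sudo crictl logs a22c4cfddf705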
	
	
	==> coredns [1ebbbaa41e609904387bf6f6ddcce7e4ba4736940bdbc05e10eb8944ddb23cab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44056 - 52839 "HINFO IN 651255659691147961.3954095016706383786. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01555555s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-681744
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-681744
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=default-k8s-diff-port-681744
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_05_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:05:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-681744
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-681744
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                aaa4a45e-c8b8-47d4-86bd-5fcd976160a4
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-gsk6s                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-default-k8s-diff-port-681744                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-n6bcg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-681744             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-681744    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-6wq7w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-681744             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qx5tq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rmdxj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node default-k8s-diff-port-681744 event: Registered Node default-k8s-diff-port-681744 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-681744 event: Registered Node default-k8s-diff-port-681744 in Controller
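The "Allocated resources" percentages above are the summed pod requests divided by the node's allocatable capacity, e.g. 850m CPU requested against 2 allocatable CPUs is roughly 42%. A sketch for regenerating this node description, assuming the kubeconfig context carries the profile name (minikube's default behaviour):

  # Sketch: the context name normally mirrors the minikube profile.
  kubectl --context default-k8s-diff-port-681744 describe node default-k8s-diff-port-681744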
	
	
	==> dmesg <==
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [32a79604be9925f6e05bfd7503e0687ee2bac5349290a54929faec55c1325915] <==
	{"level":"info","ts":"2025-12-27T10:06:03.542969Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:06:03.543053Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:06:03.543086Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:06:03.552762Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:06:03.542255Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T10:06:03.553116Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:06:03.553266Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:06:03.652496Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:06:03.652540Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:06:03.652593Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:06:03.652611Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:06:03.652625Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.677496Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.677543Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:06:03.677563Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.677574Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.681619Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-681744 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:06:03.682649Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:06:03.682686Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:06:03.683658Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:03.685521Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:06:03.691353Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:03.790814Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:06:03.795186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:06:03.795516Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:07:03 up  2:49,  0 user,  load average: 3.12, 2.71, 2.33
	Linux default-k8s-diff-port-681744 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9b232bb33745d9367c3276c09c36ce009adeee44f23352439ae08e719cd1485] <==
	I1227 10:06:08.421818       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:06:08.422252       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:06:08.422423       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:06:08.422464       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:06:08.422502       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:06:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:06:08.620661       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:06:08.620742       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:06:08.620778       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:06:08.621379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:06:38.622664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:06:38.622808       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:06:38.622899       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:06:38.622941       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:06:40.121786       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:06:40.121895       1 metrics.go:72] Registering metrics
	I1227 10:06:40.121981       1 controller.go:711] "Syncing nftables rules"
	I1227 10:06:48.620943       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:06:48.621711       1 main.go:301] handling current node
	I1227 10:06:58.620772       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:06:58.620814       1 main.go:301] handling current node
	
	
	==> kube-apiserver [05ed911c9437337bd74f43e8478b89cc420bd0d57d7c4b74775f9f242d146fd0] <==
	I1227 10:06:06.801008       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:06:06.802087       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:06.802234       1 shared_informer.go:377] "Caches are synced"
	E1227 10:06:06.841307       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:06:06.850725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:06:06.870896       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:06:06.875178       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:06:06.875996       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:06:06.877064       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:06:06.882466       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:06:06.890281       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:06:06.890789       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:06:06.901209       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:06:06.911792       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:06:07.465378       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:06:07.512399       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:06:07.547883       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:06:07.560039       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:06:07.569733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:06:07.597014       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:06:07.652476       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.254.229"}
	I1227 10:06:07.696486       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.238.18"}
	I1227 10:06:10.409295       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:06:10.631878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:06:10.709029       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [33fdbb0d08777749f0775d9538c2ddf0c2e1275e2fe8d32dc7d2e64e6ca81b94] <==
	I1227 10:06:10.033372       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033423       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033589       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036216       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036343       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036422       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036486       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036530       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036746       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.032988       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033084       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.038512       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.032262       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.032898       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033041       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.038049       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.038062       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.042954       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:10.089406       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.129913       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.129939       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:06:10.129946       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:06:10.146873       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.647194       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1227 10:06:10.649770       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [1f4229e7da039fc2a87cf4691415876ed662e1bc499beefa038042f87efd93b9] <==
	I1227 10:06:08.290401       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:06:08.415586       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:08.517975       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:08.518024       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:06:08.518106       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:06:08.543398       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:06:08.543461       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:06:08.547532       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:06:08.547860       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:06:08.547942       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:06:08.549704       1 config.go:200] "Starting service config controller"
	I1227 10:06:08.551064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:06:08.550436       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:06:08.551184       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:06:08.550448       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:06:08.551253       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:06:08.551335       1 config.go:309] "Starting node config controller"
	I1227 10:06:08.551399       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:06:08.551428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:06:08.651866       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:06:08.652025       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:06:08.652109       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5c6646254efce08f32743176f38a716d497ca0e0aaa6740710647bf39a812092] <==
	I1227 10:06:05.296709       1 serving.go:386] Generated self-signed cert in-memory
	I1227 10:06:06.857615       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:06:06.860010       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:06:06.871182       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1227 10:06:06.871211       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:06.871253       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:06:06.871266       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:06.871281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1227 10:06:06.871288       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:06.877959       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:06:06.878042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:06:06.974251       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:06.974314       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:06.974416       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:06:21 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:21.936250     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:21 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:21.936365     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:21 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:21.936859     794 scope.go:122] "RemoveContainer" containerID="79fafc77a4d88c519604daeda0da62f9f4c45135a365912e1d39694dc81da026"
	Dec 27 10:06:22 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:22.940685     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:22 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:22.940722     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:22 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:22.940863     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:26 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:26.165501     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:26 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:26.165552     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:26 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:26.165721     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:32.766222     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:32.766690     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:32.967745     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:32.968014     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:32.968040     794 scope.go:122] "RemoveContainer" containerID="c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:32.968206     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:36 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:36.165319     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:36 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:36.165880     794 scope.go:122] "RemoveContainer" containerID="c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	Dec 27 10:06:36 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:36.166142     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:38 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:38.984111     794 scope.go:122] "RemoveContainer" containerID="f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e"
	Dec 27 10:06:45 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:45.832611     794 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gsk6s" containerName="coredns"
	Dec 27 10:06:59 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:59.765583     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:59 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:59.765622     794 scope.go:122] "RemoveContainer" containerID="c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	Dec 27 10:06:59 default-k8s-diff-port-681744 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:07:00 default-k8s-diff-port-681744 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:07:00 default-k8s-diff-port-681744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f8410dd3636ef8c5aa27aff620f03619584e1cd859e8390d1cfc0169a194e203] <==
	2025/12/27 10:06:15 Using namespace: kubernetes-dashboard
	2025/12/27 10:06:15 Using in-cluster config to connect to apiserver
	2025/12/27 10:06:15 Using secret token for csrf signing
	2025/12/27 10:06:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:06:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:06:15 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:06:15 Generating JWE encryption key
	2025/12/27 10:06:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:06:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:06:17 Initializing JWE encryption key from synchronized object
	2025/12/27 10:06:17 Creating in-cluster Sidecar client
	2025/12/27 10:06:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:17 Serving insecurely on HTTP port: 9090
	2025/12/27 10:06:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:15 Starting overwatch
	
	
	==> storage-provisioner [c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311] <==
	I1227 10:06:39.055735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:06:39.099323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:06:39.099395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:06:39.109334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:42.565407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:46.830118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:50.428726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:53.482901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:56.504761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:56.510032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:56.510292       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:06:56.510521       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-681744_0f29a285-f0bd-4ff3-863d-feb7994819f7!
	I1227 10:06:56.510694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b784311c-5962-4e1e-afb9-963a396928d5", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-681744_0f29a285-f0bd-4ff3-863d-feb7994819f7 became leader
	W1227 10:06:56.516203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:56.519223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:56.611116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-681744_0f29a285-f0bd-4ff3-863d-feb7994819f7!
	W1227 10:06:58.522927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:58.528326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:00.533215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:00.546406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:02.549153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:02.557445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e] <==
	I1227 10:06:08.200546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:06:38.204381       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744: exit status 2 (428.164095ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-681744
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-681744:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89",
	        "Created": "2025-12-27T10:04:44.730801241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518614,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:05:53.355574602Z",
	            "FinishedAt": "2025-12-27T10:05:52.246136882Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/hosts",
	        "LogPath": "/var/lib/docker/containers/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89/d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89-json.log",
	        "Name": "/default-k8s-diff-port-681744",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-681744:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-681744",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d2370e32a3db686d85e4d6785307c138600396030b0352d75607381960c53a89",
	                "LowerDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da655765ee70512272e8bbdfdd19ad89be9ad5b15dd83cabbe503b7c3d1dd23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-681744",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-681744/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-681744",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-681744",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-681744",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c075b4da0ee6b2b2fff3aa99b8375f2b763ab8e19555ef79d6e1a600a730d93",
	            "SandboxKey": "/var/run/docker/netns/6c075b4da0ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-681744": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:8e:4a:75:ec:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a1f92b122a97b2834afb7ef2e15881b65b61b90adec9a9012e2ffcfe6970dabd",
	                    "EndpointID": "90d57dfead9e59ce27cea5186aaedc5e0df64513298b8667705e2024db1503d2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-681744",
	                        "d2370e32a3db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744: exit status 2 (464.787592ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-681744 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-681744 logs -n 25: (1.559444799s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ ssh     │ force-systemd-flag-779725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p force-systemd-flag-779725                                                                                                                                                                                                                  │ force-systemd-flag-779725    │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ image   │ no-preload-021144 image list --format=json                                                                                                                                                                                                    │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ pause   │ -p no-preload-021144 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p no-preload-021144                                                                                                                                                                                                                          │ no-preload-021144            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p disable-driver-mounts-242374                                                                                                                                                                                                               │ disable-driver-mounts-242374 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p embed-certs-017122 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                                                                                                   │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122           │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340            │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ image   │ default-k8s-diff-port-681744 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p default-k8s-diff-port-681744 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-681744 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:06:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:06:48.765787  522415 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:06:48.766009  522415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:48.766039  522415 out.go:374] Setting ErrFile to fd 2...
	I1227 10:06:48.766059  522415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:48.766500  522415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:06:48.767074  522415 out.go:368] Setting JSON to false
	I1227 10:06:48.768050  522415 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10158,"bootTime":1766819851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:06:48.768169  522415 start.go:143] virtualization:  
	I1227 10:06:48.772170  522415 out.go:179] * [newest-cni-133340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:06:48.776483  522415 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:06:48.776548  522415 notify.go:221] Checking for updates...
	I1227 10:06:48.782742  522415 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:06:48.786061  522415 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:06:48.789052  522415 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:06:48.792046  522415 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:06:48.795081  522415 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:06:48.798681  522415 config.go:182] Loaded profile config "default-k8s-diff-port-681744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:48.798801  522415 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:06:48.820711  522415 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:06:48.820827  522415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:48.879245  522415 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:06:48.869406055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:48.879355  522415 docker.go:319] overlay module found
	I1227 10:06:48.882590  522415 out.go:179] * Using the docker driver based on user configuration
	I1227 10:06:48.885552  522415 start.go:309] selected driver: docker
	I1227 10:06:48.885571  522415 start.go:928] validating driver "docker" against <nil>
	I1227 10:06:48.885586  522415 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:06:48.886459  522415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:48.942136  522415 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:06:48.932912635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:48.942345  522415 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 10:06:48.942377  522415 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 10:06:48.942608  522415 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:06:48.945423  522415 out.go:179] * Using Docker driver with root privileges
	I1227 10:06:48.948313  522415 cni.go:84] Creating CNI manager for ""
	I1227 10:06:48.948387  522415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:06:48.948402  522415 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:06:48.948484  522415 start.go:353] cluster config:
	{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:06:48.953594  522415 out.go:179] * Starting "newest-cni-133340" primary control-plane node in "newest-cni-133340" cluster
	I1227 10:06:48.956505  522415 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:06:48.960136  522415 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:06:48.963018  522415 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:06:48.963072  522415 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:06:48.963081  522415 cache.go:65] Caching tarball of preloaded images
	I1227 10:06:48.963168  522415 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:06:48.963184  522415 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:06:48.963311  522415 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:06:48.963336  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json: {Name:mka98e5e41c61eb971db956a5c71d82577d33d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:06:48.963494  522415 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:06:48.983663  522415 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:06:48.983683  522415 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:06:48.983697  522415 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:06:48.983728  522415 start.go:360] acquireMachinesLock for newest-cni-133340: {Name:mke43a3ebd8f4eaf65da86bf9dafee410f8229a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:06:48.983833  522415 start.go:364] duration metric: took 86.688µs to acquireMachinesLock for "newest-cni-133340"
	I1227 10:06:48.983863  522415 start.go:93] Provisioning new machine with config: &{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:06:48.983932  522415 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:06:48.987230  522415 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:06:48.987455  522415 start.go:159] libmachine.API.Create for "newest-cni-133340" (driver="docker")
	I1227 10:06:48.987487  522415 client.go:173] LocalClient.Create starting
	I1227 10:06:48.987550  522415 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:06:48.987595  522415 main.go:144] libmachine: Decoding PEM data...
	I1227 10:06:48.987614  522415 main.go:144] libmachine: Parsing certificate...
	I1227 10:06:48.987675  522415 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:06:48.987694  522415 main.go:144] libmachine: Decoding PEM data...
	I1227 10:06:48.987713  522415 main.go:144] libmachine: Parsing certificate...
	I1227 10:06:48.988072  522415 cli_runner.go:164] Run: docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:06:49.004592  522415 cli_runner.go:211] docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:06:49.004699  522415 network_create.go:284] running [docker network inspect newest-cni-133340] to gather additional debugging logs...
	I1227 10:06:49.004724  522415 cli_runner.go:164] Run: docker network inspect newest-cni-133340
	W1227 10:06:49.021988  522415 cli_runner.go:211] docker network inspect newest-cni-133340 returned with exit code 1
	I1227 10:06:49.022058  522415 network_create.go:287] error running [docker network inspect newest-cni-133340]: docker network inspect newest-cni-133340: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-133340 not found
	I1227 10:06:49.022075  522415 network_create.go:289] output of [docker network inspect newest-cni-133340]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-133340 not found
	
	** /stderr **
	I1227 10:06:49.022237  522415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:06:49.039674  522415 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:06:49.040150  522415 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:06:49.040440  522415 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:06:49.040874  522415 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a50530}
	I1227 10:06:49.040898  522415 network_create.go:124] attempt to create docker network newest-cni-133340 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:06:49.040964  522415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-133340 newest-cni-133340
	I1227 10:06:49.102657  522415 network_create.go:108] docker network newest-cni-133340 192.168.76.0/24 created
	I1227 10:06:49.102689  522415 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-133340" container
	I1227 10:06:49.102778  522415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:06:49.118551  522415 cli_runner.go:164] Run: docker volume create newest-cni-133340 --label name.minikube.sigs.k8s.io=newest-cni-133340 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:06:49.136165  522415 oci.go:103] Successfully created a docker volume newest-cni-133340
	I1227 10:06:49.136261  522415 cli_runner.go:164] Run: docker run --rm --name newest-cni-133340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-133340 --entrypoint /usr/bin/test -v newest-cni-133340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:06:49.658339  522415 oci.go:107] Successfully prepared a docker volume newest-cni-133340
	I1227 10:06:49.658413  522415 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:06:49.658428  522415 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:06:49.658521  522415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-133340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:06:53.863051  522415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-133340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.204459279s)
	I1227 10:06:53.863084  522415 kic.go:203] duration metric: took 4.204652691s to extract preloaded images to volume ...
	W1227 10:06:53.863220  522415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:06:53.863329  522415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:06:53.923891  522415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-133340 --name newest-cni-133340 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-133340 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-133340 --network newest-cni-133340 --ip 192.168.76.2 --volume newest-cni-133340:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:06:54.225384  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Running}}
	I1227 10:06:54.245637  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:06:54.264758  522415 cli_runner.go:164] Run: docker exec newest-cni-133340 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:06:54.317513  522415 oci.go:144] the created container "newest-cni-133340" has a running status.
	I1227 10:06:54.317542  522415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa...
	I1227 10:06:54.840922  522415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:06:54.861377  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:06:54.878717  522415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:06:54.878740  522415 kic_runner.go:114] Args: [docker exec --privileged newest-cni-133340 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:06:54.918703  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:06:54.939313  522415 machine.go:94] provisionDockerMachine start ...
	I1227 10:06:54.939415  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:54.956396  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:54.956736  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:54.956751  522415 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:06:54.957392  522415 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:06:58.106087  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:06:58.106110  522415 ubuntu.go:182] provisioning hostname "newest-cni-133340"
	I1227 10:06:58.106202  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.130554  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:58.130868  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:58.130883  522415 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-133340 && echo "newest-cni-133340" | sudo tee /etc/hostname
	I1227 10:06:58.279060  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:06:58.279136  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.296128  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:58.296449  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:58.296465  522415 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133340/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:06:58.438713  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:06:58.438739  522415 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:06:58.438757  522415 ubuntu.go:190] setting up certificates
	I1227 10:06:58.438767  522415 provision.go:84] configureAuth start
	I1227 10:06:58.438826  522415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:06:58.462462  522415 provision.go:143] copyHostCerts
	I1227 10:06:58.462529  522415 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:06:58.462543  522415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:06:58.462647  522415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:06:58.462780  522415 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:06:58.462795  522415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:06:58.462838  522415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:06:58.462902  522415 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:06:58.462912  522415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:06:58.462936  522415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:06:58.462988  522415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133340 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-133340]
	I1227 10:06:58.592685  522415 provision.go:177] copyRemoteCerts
	I1227 10:06:58.592758  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:06:58.592808  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.609587  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:58.709878  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:06:58.728093  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:06:58.746128  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:06:58.765604  522415 provision.go:87] duration metric: took 326.823551ms to configureAuth
	I1227 10:06:58.765634  522415 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:06:58.765831  522415 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:06:58.765956  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:58.783926  522415 main.go:144] libmachine: Using SSH client type: native
	I1227 10:06:58.784227  522415 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1227 10:06:58.784241  522415 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:06:59.205882  522415 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:06:59.205908  522415 machine.go:97] duration metric: took 4.26657174s to provisionDockerMachine
	I1227 10:06:59.205919  522415 client.go:176] duration metric: took 10.218424112s to LocalClient.Create
	I1227 10:06:59.205932  522415 start.go:167] duration metric: took 10.218479186s to libmachine.API.Create "newest-cni-133340"
	I1227 10:06:59.205940  522415 start.go:293] postStartSetup for "newest-cni-133340" (driver="docker")
	I1227 10:06:59.205950  522415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:06:59.206039  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:06:59.206088  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.229590  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.332538  522415 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:06:59.336756  522415 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:06:59.336782  522415 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:06:59.336793  522415 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:06:59.336849  522415 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:06:59.336928  522415 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:06:59.337038  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:06:59.346904  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:06:59.383988  522415 start.go:296] duration metric: took 178.03229ms for postStartSetup
	I1227 10:06:59.384380  522415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:06:59.419398  522415 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:06:59.419680  522415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:06:59.419733  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.442610  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.555501  522415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:06:59.562777  522415 start.go:128] duration metric: took 10.578830736s to createHost
	I1227 10:06:59.562811  522415 start.go:83] releasing machines lock for "newest-cni-133340", held for 10.578964342s
	I1227 10:06:59.562885  522415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:06:59.591042  522415 ssh_runner.go:195] Run: cat /version.json
	I1227 10:06:59.591103  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.591370  522415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:06:59.591428  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:06:59.627727  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.638092  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:06:59.725896  522415 ssh_runner.go:195] Run: systemctl --version
	I1227 10:06:59.847506  522415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:06:59.905052  522415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:06:59.912750  522415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:06:59.912933  522415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:06:59.957419  522415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:06:59.957510  522415 start.go:496] detecting cgroup driver to use...
	I1227 10:06:59.957572  522415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:06:59.957657  522415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:06:59.981596  522415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:06:59.998949  522415 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:06:59.999073  522415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:00.056378  522415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:00.117601  522415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:00.420084  522415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:00.565968  522415 docker.go:234] disabling docker service ...
	I1227 10:07:00.566093  522415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:00.598748  522415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:00.634717  522415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:00.773124  522415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:00.922635  522415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:00.936449  522415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:00.953118  522415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:00.953240  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.962874  522415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:00.963019  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.972293  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.980889  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:00.989630  522415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:00.998403  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:01.008732  522415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:01.023183  522415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:01.032322  522415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:01.039975  522415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:01.047656  522415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:01.159721  522415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:07:01.319325  522415 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:01.319415  522415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:01.324479  522415 start.go:574] Will wait 60s for crictl version
	I1227 10:07:01.324622  522415 ssh_runner.go:195] Run: which crictl
	I1227 10:07:01.328380  522415 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:01.363590  522415 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:01.363807  522415 ssh_runner.go:195] Run: crio --version
	I1227 10:07:01.409718  522415 ssh_runner.go:195] Run: crio --version
	I1227 10:07:01.450980  522415 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:01.453978  522415 cli_runner.go:164] Run: docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:01.492010  522415 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:01.499136  522415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:01.514617  522415 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 10:07:01.517408  522415 kubeadm.go:884] updating cluster {Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:01.517580  522415 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:01.517655  522415 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:01.569422  522415 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:01.569444  522415 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:01.569503  522415 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:01.604902  522415 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:01.604923  522415 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:01.604931  522415 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:01.605013  522415 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-133340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:07:01.605096  522415 ssh_runner.go:195] Run: crio config
	I1227 10:07:01.683127  522415 cni.go:84] Creating CNI manager for ""
	I1227 10:07:01.683170  522415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:01.683195  522415 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 10:07:01.683222  522415 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-133340 NodeName:newest-cni-133340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:01.683379  522415 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-133340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:07:01.683454  522415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:01.698489  522415 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:01.698582  522415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:01.709501  522415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:07:01.727703  522415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:01.746890  522415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1227 10:07:01.761659  522415 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:01.765575  522415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:01.777312  522415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:01.945234  522415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:01.964083  522415 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340 for IP: 192.168.76.2
	I1227 10:07:01.964102  522415 certs.go:195] generating shared ca certs ...
	I1227 10:07:01.964119  522415 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:01.964263  522415 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:07:01.964304  522415 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:07:01.964313  522415 certs.go:257] generating profile certs ...
	I1227 10:07:01.964367  522415 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.key
	I1227 10:07:01.964378  522415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.crt with IP's: []
	I1227 10:07:02.281656  522415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.crt ...
	I1227 10:07:02.281687  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.crt: {Name:mkdce8b289d01281f2780d43a77c30ced7ad46af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:02.282008  522415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.key ...
	I1227 10:07:02.282050  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.key: {Name:mk1d5904ea2114c1100a2a773295f03dc9de835b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:02.282223  522415 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key.5a59841a
	I1227 10:07:02.282270  522415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt.5a59841a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:07:02.559432  522415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt.5a59841a ...
	I1227 10:07:02.559518  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt.5a59841a: {Name:mk4a08b481cc9a57b6b2c8df2abfc40e9dd15f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:02.559741  522415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key.5a59841a ...
	I1227 10:07:02.559782  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key.5a59841a: {Name:mk7cd08d8aef1f75c094d6d1fff81714bae9409c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:02.559913  522415 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt.5a59841a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt
	I1227 10:07:02.560028  522415 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key.5a59841a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key
	I1227 10:07:02.560122  522415 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key
	I1227 10:07:02.560159  522415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.crt with IP's: []
	I1227 10:07:02.669782  522415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.crt ...
	I1227 10:07:02.669981  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.crt: {Name:mk66f69f0764f338d91f22a97b3ea74a21382f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:02.670231  522415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key ...
	I1227 10:07:02.670274  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key: {Name:mkeff225309afdff31d22b7707cecd692841bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:02.670517  522415 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:07:02.670625  522415 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:02.670659  522415 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:02.670705  522415 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:02.670761  522415 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:02.670816  522415 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:02.670883  522415 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:02.671510  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:02.692422  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:07:02.720356  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:02.740451  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:02.768157  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:07:02.786519  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:07:02.808164  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:02.826887  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:07:02.850027  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:07:02.869192  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:07:02.898964  522415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:02.951349  522415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:02.976444  522415 ssh_runner.go:195] Run: openssl version
	I1227 10:07:02.991240  522415 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:07:02.999859  522415 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:07:03.014399  522415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:07:03.018988  522415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:07:03.019060  522415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:07:03.068933  522415 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:03.077701  522415 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:03.086706  522415 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:03.094963  522415 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:03.103209  522415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:03.107772  522415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:03.107901  522415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:03.151581  522415 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:07:03.159635  522415 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:07:03.167672  522415 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:07:03.176164  522415 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:07:03.184533  522415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:07:03.188813  522415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:07:03.188927  522415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:07:03.239165  522415 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:07:03.247067  522415 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
	I1227 10:07:03.255859  522415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:03.259808  522415 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:07:03.259895  522415 kubeadm.go:401] StartCluster: {Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:03.259993  522415 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:03.260056  522415 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:03.298801  522415 cri.go:96] found id: ""
	I1227 10:07:03.298876  522415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:03.307650  522415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:07:03.318354  522415 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:07:03.318470  522415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:07:03.329408  522415 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:07:03.329481  522415 kubeadm.go:158] found existing configuration files:
	
	I1227 10:07:03.329571  522415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:07:03.339051  522415 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:07:03.339118  522415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:07:03.347372  522415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:07:03.356844  522415 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:07:03.356909  522415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:07:03.365196  522415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:07:03.375860  522415 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:07:03.376015  522415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:07:03.385330  522415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:07:03.394768  522415 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:07:03.394890  522415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:07:03.403710  522415 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:07:03.460474  522415 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:07:03.460731  522415 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:07:03.564711  522415 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:07:03.564815  522415 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:07:03.564878  522415 kubeadm.go:319] OS: Linux
	I1227 10:07:03.564953  522415 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:07:03.565030  522415 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:07:03.565106  522415 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:07:03.565186  522415 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:07:03.565265  522415 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:07:03.565345  522415 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:07:03.565422  522415 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:07:03.565507  522415 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:07:03.565582  522415 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:07:03.657708  522415 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:07:03.657884  522415 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:07:03.658017  522415 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:07:03.680180  522415 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:07:03.685422  522415 out.go:252]   - Generating certificates and keys ...
	I1227 10:07:03.685574  522415 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:07:03.685788  522415 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	Dec 27 10:06:39 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:39.02354681Z" level=info msg="Created container c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311: kube-system/storage-provisioner/storage-provisioner" id=27e1680f-d087-4561-bbe4-b625fa1911c7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:39 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:39.024965781Z" level=info msg="Starting container: c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311" id=684b206b-d046-4a1c-8a4c-f1046fee1e7e name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:06:39 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:39.029103063Z" level=info msg="Started container" PID=1658 containerID=c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311 description=kube-system/storage-provisioner/storage-provisioner id=684b206b-d046-4a1c-8a4c-f1046fee1e7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35ea032155ff49f7f1ae6f6fc5c6ff590bc1dfb59d6d1216b8eb26b998db64e
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.622671247Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.628261077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.62845248Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.628547366Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.633866447Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.634044837Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.634129343Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.637954556Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.637987393Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.638018901Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.641253214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:06:48 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:48.64128871Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.766249722Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c7bc6579-cd84-4bcc-8f7b-0911224ffac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.76788568Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9cb42d5-c05b-4940-9636-f7cbf138d8aa name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.770286416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq/dashboard-metrics-scraper" id=e8fed25d-f3a5-4b35-9e7f-3cc01d5ada1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.770403357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.789485531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.790240535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.825278158Z" level=info msg="Created container a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq/dashboard-metrics-scraper" id=e8fed25d-f3a5-4b35-9e7f-3cc01d5ada1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.82740808Z" level=info msg="Starting container: a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf" id=6144c3ae-8886-43b6-861e-c13464bad9d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:06:59 default-k8s-diff-port-681744 crio[658]: time="2025-12-27T10:06:59.831781238Z" level=info msg="Started container" PID=1773 containerID=a22c4cfddf705c6c7d8c65da2769b16a9385370f9c1db836c666d3c2e33c79cf description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq/dashboard-metrics-scraper id=6144c3ae-8886-43b6-861e-c13464bad9d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f0fd8832e29d9c4e362c2100efea92b6cc264ed4acd01cc046c142c31623735
	Dec 27 10:06:59 default-k8s-diff-port-681744 conmon[1771]: conmon a22c4cfddf705c6c7d8c <ninfo>: container 1773 exited with status 1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a22c4cfddf705       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   9f0fd8832e29d       dashboard-metrics-scraper-867fb5f87b-qx5tq             kubernetes-dashboard
	c5e4c2046964b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   e35ea032155ff       storage-provisioner                                    kube-system
	c6e9ba892a5f9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   9f0fd8832e29d       dashboard-metrics-scraper-867fb5f87b-qx5tq             kubernetes-dashboard
	f8410dd3636ef       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   1b6a548e7d145       kubernetes-dashboard-b84665fb8-rmdxj                   kubernetes-dashboard
	1590af91f4b09       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   21dcfb7bfab8d       busybox                                                default
	1ebbbaa41e609       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           57 seconds ago       Running             coredns                     1                   3ce26d2397773       coredns-7d764666f9-gsk6s                               kube-system
	d9b232bb33745       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           57 seconds ago       Running             kindnet-cni                 1                   f82566a408a23       kindnet-n6bcg                                          kube-system
	f21c10c677052       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   e35ea032155ff       storage-provisioner                                    kube-system
	1f4229e7da039       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           57 seconds ago       Running             kube-proxy                  1                   cdbab87ed14cf       kube-proxy-6wq7w                                       kube-system
	05ed911c94373       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   79a06ae9bd326       kube-apiserver-default-k8s-diff-port-681744            kube-system
	33fdbb0d08777       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   1cf13a42fb3d8       kube-controller-manager-default-k8s-diff-port-681744   kube-system
	5c6646254efce       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   0462d45b1cdd3       kube-scheduler-default-k8s-diff-port-681744            kube-system
	32a79604be992       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   405bcc6dd0b21       etcd-default-k8s-diff-port-681744                      kube-system
	
	
	==> coredns [1ebbbaa41e609904387bf6f6ddcce7e4ba4736940bdbc05e10eb8944ddb23cab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44056 - 52839 "HINFO IN 651255659691147961.3954095016706383786. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01555555s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-681744
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-681744
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=default-k8s-diff-port-681744
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_05_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:05:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-681744
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:06:57 +0000   Sat, 27 Dec 2025 10:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-681744
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                aaa4a45e-c8b8-47d4-86bd-5fcd976160a4
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-gsk6s                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-default-k8s-diff-port-681744                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-n6bcg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-681744             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-681744    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-6wq7w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-681744             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qx5tq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rmdxj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node default-k8s-diff-port-681744 event: Registered Node default-k8s-diff-port-681744 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node default-k8s-diff-port-681744 event: Registered Node default-k8s-diff-port-681744 in Controller
	
	
	==> dmesg <==
	[Dec27 09:35] overlayfs: idmapped layers are currently not supported
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [32a79604be9925f6e05bfd7503e0687ee2bac5349290a54929faec55c1325915] <==
	{"level":"info","ts":"2025-12-27T10:06:03.542969Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:06:03.543053Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:06:03.543086Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:06:03.552762Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:06:03.542255Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T10:06:03.553116Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:06:03.553266Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:06:03.652496Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:06:03.652540Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:06:03.652593Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:06:03.652611Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:06:03.652625Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.677496Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.677543Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:06:03.677563Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.677574Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:06:03.681619Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-681744 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:06:03.682649Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:06:03.682686Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:06:03.683658Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:03.685521Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:06:03.691353Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:03.790814Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:06:03.795186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:06:03.795516Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:07:06 up  2:49,  0 user,  load average: 3.11, 2.71, 2.33
	Linux default-k8s-diff-port-681744 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9b232bb33745d9367c3276c09c36ce009adeee44f23352439ae08e719cd1485] <==
	I1227 10:06:08.421818       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:06:08.422252       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:06:08.422423       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:06:08.422464       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:06:08.422502       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:06:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:06:08.620661       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:06:08.620742       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:06:08.620778       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:06:08.621379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:06:38.622664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:06:38.622808       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:06:38.622899       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:06:38.622941       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:06:40.121786       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:06:40.121895       1 metrics.go:72] Registering metrics
	I1227 10:06:40.121981       1 controller.go:711] "Syncing nftables rules"
	I1227 10:06:48.620943       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:06:48.621711       1 main.go:301] handling current node
	I1227 10:06:58.620772       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:06:58.620814       1 main.go:301] handling current node
	
	
	==> kube-apiserver [05ed911c9437337bd74f43e8478b89cc420bd0d57d7c4b74775f9f242d146fd0] <==
	I1227 10:06:06.801008       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:06:06.802087       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:06.802234       1 shared_informer.go:377] "Caches are synced"
	E1227 10:06:06.841307       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:06:06.850725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:06:06.870896       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:06:06.875178       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:06:06.875996       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:06:06.877064       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:06:06.882466       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:06:06.890281       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:06:06.890789       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:06:06.901209       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:06:06.911792       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:06:07.465378       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:06:07.512399       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:06:07.547883       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:06:07.560039       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:06:07.569733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:06:07.597014       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:06:07.652476       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.254.229"}
	I1227 10:06:07.696486       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.238.18"}
	I1227 10:06:10.409295       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:06:10.631878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:06:10.709029       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [33fdbb0d08777749f0775d9538c2ddf0c2e1275e2fe8d32dc7d2e64e6ca81b94] <==
	I1227 10:06:10.033372       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033423       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033589       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036216       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036343       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036422       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036486       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036530       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.036746       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.032988       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033084       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.038512       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.032262       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.032898       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.033041       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.038049       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.038062       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.042954       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:10.089406       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.129913       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.129939       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:06:10.129946       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:06:10.146873       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:10.647194       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1227 10:06:10.649770       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [1f4229e7da039fc2a87cf4691415876ed662e1bc499beefa038042f87efd93b9] <==
	I1227 10:06:08.290401       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:06:08.415586       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:08.517975       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:08.518024       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:06:08.518106       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:06:08.543398       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:06:08.543461       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:06:08.547532       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:06:08.547860       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:06:08.547942       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:06:08.549704       1 config.go:200] "Starting service config controller"
	I1227 10:06:08.551064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:06:08.550436       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:06:08.551184       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:06:08.550448       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:06:08.551253       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:06:08.551335       1 config.go:309] "Starting node config controller"
	I1227 10:06:08.551399       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:06:08.551428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:06:08.651866       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:06:08.652025       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:06:08.652109       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5c6646254efce08f32743176f38a716d497ca0e0aaa6740710647bf39a812092] <==
	I1227 10:06:05.296709       1 serving.go:386] Generated self-signed cert in-memory
	I1227 10:06:06.857615       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:06:06.860010       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:06:06.871182       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1227 10:06:06.871211       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:06.871253       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:06:06.871266       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:06.871281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1227 10:06:06.871288       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:06.877959       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:06:06.878042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:06:06.974251       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:06.974314       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:06.974416       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:06:21 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:21.936250     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:21 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:21.936365     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:21 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:21.936859     794 scope.go:122] "RemoveContainer" containerID="79fafc77a4d88c519604daeda0da62f9f4c45135a365912e1d39694dc81da026"
	Dec 27 10:06:22 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:22.940685     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:22 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:22.940722     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:22 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:22.940863     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:26 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:26.165501     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:26 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:26.165552     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:26 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:26.165721     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:32.766222     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:32.766690     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:32.967745     794 scope.go:122] "RemoveContainer" containerID="e2ce028b216cf0cfa975ee8215bef862f99c0df4b5da97fbceb48175106668a5"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:32.968014     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:32.968040     794 scope.go:122] "RemoveContainer" containerID="c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	Dec 27 10:06:32 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:32.968206     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:36 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:36.165319     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:36 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:36.165880     794 scope.go:122] "RemoveContainer" containerID="c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	Dec 27 10:06:36 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:36.166142     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qx5tq_kubernetes-dashboard(f1fc18c9-fdc6-471c-981e-47bf452c499c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" podUID="f1fc18c9-fdc6-471c-981e-47bf452c499c"
	Dec 27 10:06:38 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:38.984111     794 scope.go:122] "RemoveContainer" containerID="f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e"
	Dec 27 10:06:45 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:45.832611     794 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gsk6s" containerName="coredns"
	Dec 27 10:06:59 default-k8s-diff-port-681744 kubelet[794]: E1227 10:06:59.765583     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qx5tq" containerName="dashboard-metrics-scraper"
	Dec 27 10:06:59 default-k8s-diff-port-681744 kubelet[794]: I1227 10:06:59.765622     794 scope.go:122] "RemoveContainer" containerID="c6e9ba892a5f9f923228744a2157ea2f243b0bdffa0791bb446d1579dcdc1777"
	Dec 27 10:06:59 default-k8s-diff-port-681744 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:07:00 default-k8s-diff-port-681744 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:07:00 default-k8s-diff-port-681744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f8410dd3636ef8c5aa27aff620f03619584e1cd859e8390d1cfc0169a194e203] <==
	2025/12/27 10:06:15 Using namespace: kubernetes-dashboard
	2025/12/27 10:06:15 Using in-cluster config to connect to apiserver
	2025/12/27 10:06:15 Using secret token for csrf signing
	2025/12/27 10:06:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:06:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:06:15 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:06:15 Generating JWE encryption key
	2025/12/27 10:06:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:06:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:06:17 Initializing JWE encryption key from synchronized object
	2025/12/27 10:06:17 Creating in-cluster Sidecar client
	2025/12/27 10:06:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:17 Serving insecurely on HTTP port: 9090
	2025/12/27 10:06:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:06:15 Starting overwatch
	
	
	==> storage-provisioner [c5e4c2046964bfb93c146b52bc46a0c95d3b41f37952360670a9c58bd4b89311] <==
	I1227 10:06:39.055735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:06:39.099323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:06:39.099395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:06:39.109334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:42.565407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:46.830118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:50.428726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:53.482901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:56.504761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:56.510032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:56.510292       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:06:56.510521       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-681744_0f29a285-f0bd-4ff3-863d-feb7994819f7!
	I1227 10:06:56.510694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b784311c-5962-4e1e-afb9-963a396928d5", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-681744_0f29a285-f0bd-4ff3-863d-feb7994819f7 became leader
	W1227 10:06:56.516203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:56.519223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:06:56.611116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-681744_0f29a285-f0bd-4ff3-863d-feb7994819f7!
	W1227 10:06:58.522927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:06:58.528326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:00.533215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:00.546406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:02.549153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:02.557445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:04.564017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:07:04.579636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f21c10c67705231af4210c6ae61ccc093460f828467de0700b89eaf1cfcaed8e] <==
	I1227 10:06:08.200546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:06:38.204381       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744: exit status 2 (531.686304ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-133340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-133340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (379.829044ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-133340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
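For context on the MK_ADDON_ENABLE_PAUSED failure above: before enabling an addon, minikube checks whether the cluster is paused, and on this profile that check shells out to `sudo runc list -f json`, which exits non-zero here because `/run/runc` is missing on the CRI-O node ("open /run/runc: no such file or directory"). A rough sketch of such a paused-container check follows; it is illustrative only and not minikube's actual code:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Minimal shape of one entry in `runc list -f json` output; the real output
    // carries more fields (pid, bundle, created, ...).
    type runcState struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    // listPausedIDs returns the IDs of containers that runc reports as paused.
    func listPausedIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	if err != nil {
    		// This is the failure mode in the log above: on this node the command
    		// itself fails, so the addon enable aborts with MK_ADDON_ENABLE_PAUSED.
    		return nil, fmt.Errorf("runc list: %w", err)
    	}
    	var states []runcState
    	if err := json.Unmarshal(out, &states); err != nil {
    		return nil, err
    	}
    	var paused []string
    	for _, s := range states {
    		if s.Status == "paused" {
    			paused = append(paused, s.ID)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	ids, err := listPausedIDs()
    	if err != nil {
    		fmt.Println("paused check failed:", err)
    		return
    	}
    	fmt.Println("paused containers:", ids)
    }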
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-133340
helpers_test.go:244: (dbg) docker inspect newest-cni-133340:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50",
	        "Created": "2025-12-27T10:06:53.938355809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 522807,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:06:54.00014839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/hostname",
	        "HostsPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/hosts",
	        "LogPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50-json.log",
	        "Name": "/newest-cni-133340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-133340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-133340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50",
	                "LowerDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-133340",
	                "Source": "/var/lib/docker/volumes/newest-cni-133340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-133340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-133340",
	                "name.minikube.sigs.k8s.io": "newest-cni-133340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c085fc97018088b0e07846bf02c645e31bae122e9d0ccbe63688695883c2e0a",
	            "SandboxKey": "/var/run/docker/netns/7c085fc97018",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-133340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:8e:59:07:b2:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96182c0697dfbd3eb2978fc5bdfe5ab11e5c5e202e442a3bbbd2ca0b5a3c02a5",
	                    "EndpointID": "ce03d49d0c6c121c81670656b57b49128447ad97d3ae71729d8c8b21224f50f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-133340",
	                        "83f3564ca786"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
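The Ports block in the inspect output above shows each container port published on a random loopback port (22/tcp on 127.0.0.1:33461, 8443/tcp on 127.0.0.1:33464, and so on). The harness later recovers the SSH port with the Go template visible in the minikube log below ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}). A small stand-alone sketch of the same lookup, assuming the container from this report is still running:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same template the harness passes to `docker container inspect -f`; it walks
    	// NetworkSettings.Ports["22/tcp"][0].HostPort in the inspect JSON shown above.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "newest-cni-133340").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33461 in this run
    }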
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-133340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-133340 logs -n 25: (1.985970095s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable metrics-server -p embed-certs-017122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p embed-certs-017122 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-681744 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                                                                                                   │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ image   │ default-k8s-diff-port-681744 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p default-k8s-diff-port-681744 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-681744                                                                                                                                                                                                               │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ delete  │ -p default-k8s-diff-port-681744                                                                                                                                                                                                               │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-gcs-425359 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-425359        │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-425359                                                                                                                                                                                                                 │ test-preload-dl-gcs-425359        │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-github-343343 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-343343     │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-github-343343                                                                                                                                                                                                              │ test-preload-dl-github-343343     │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-955830 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-955830 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-955830                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-955830 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p auto-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-246753                       │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-133340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:07:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:07:20.073172  526230 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:20.073633  526230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:20.073667  526230 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:20.073688  526230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:20.074035  526230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:07:20.074581  526230 out.go:368] Setting JSON to false
	I1227 10:07:20.075589  526230 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10189,"bootTime":1766819851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:07:20.075692  526230 start.go:143] virtualization:  
	I1227 10:07:20.078872  526230 out.go:179] * [auto-246753] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:07:20.082935  526230 notify.go:221] Checking for updates...
	I1227 10:07:20.083983  526230 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:07:20.087854  526230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:07:20.090966  526230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:20.093982  526230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:07:20.096898  526230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:07:20.099711  526230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:07:20.103733  526230 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:20.103864  526230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:07:20.149196  526230 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:07:20.149325  526230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:20.240202  526230 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:07:20.228088373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:20.240309  526230 docker.go:319] overlay module found
	I1227 10:07:20.243520  526230 out.go:179] * Using the docker driver based on user configuration
	I1227 10:07:20.246327  526230 start.go:309] selected driver: docker
	I1227 10:07:20.246350  526230 start.go:928] validating driver "docker" against <nil>
	I1227 10:07:20.246365  526230 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:07:20.247140  526230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:20.303726  526230 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:07:20.294403644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:20.303892  526230 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:07:20.304131  526230 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:07:20.307224  526230 out.go:179] * Using Docker driver with root privileges
	I1227 10:07:20.310262  526230 cni.go:84] Creating CNI manager for ""
	I1227 10:07:20.310343  526230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:20.310356  526230 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:07:20.310438  526230 start.go:353] cluster config:
	{Name:auto-246753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1227 10:07:20.313518  526230 out.go:179] * Starting "auto-246753" primary control-plane node in "auto-246753" cluster
	I1227 10:07:20.316246  526230 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:07:20.319162  526230 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:07:20.322073  526230 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:20.322128  526230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:07:20.322143  526230 cache.go:65] Caching tarball of preloaded images
	I1227 10:07:20.322187  526230 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:07:20.322265  526230 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:07:20.322276  526230 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:07:20.322401  526230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/config.json ...
	I1227 10:07:20.322419  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/config.json: {Name:mke48d9f6ce7493f886faac721b7efd620a30286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:20.343704  526230 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:07:20.343725  526230 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:07:20.343807  526230 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:07:20.343888  526230 start.go:360] acquireMachinesLock for auto-246753: {Name:mk37f906b42d876554f488fb64226eb1625f3711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:07:20.344080  526230 start.go:364] duration metric: took 171.333µs to acquireMachinesLock for "auto-246753"
	I1227 10:07:20.344116  526230 start.go:93] Provisioning new machine with config: &{Name:auto-246753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:07:20.344192  526230 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:07:19.066107  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:19.565913  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:20.066087  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:20.573222  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:21.066737  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:21.566720  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:22.066636  522415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:07:22.219912  522415 kubeadm.go:1114] duration metric: took 3.781349788s to wait for elevateKubeSystemPrivileges
	I1227 10:07:22.219941  522415 kubeadm.go:403] duration metric: took 18.960081986s to StartCluster
	I1227 10:07:22.219958  522415 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:22.220020  522415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:22.220618  522415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:22.220822  522415 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:07:22.220960  522415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:07:22.221202  522415 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:22.221234  522415 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:07:22.221288  522415 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-133340"
	I1227 10:07:22.221302  522415 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-133340"
	I1227 10:07:22.221323  522415 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:22.221828  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:22.223806  522415 addons.go:70] Setting default-storageclass=true in profile "newest-cni-133340"
	I1227 10:07:22.223849  522415 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-133340"
	I1227 10:07:22.224193  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:22.225355  522415 out.go:179] * Verifying Kubernetes components...
	I1227 10:07:22.229515  522415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:22.265791  522415 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:07:22.269874  522415 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:07:22.269899  522415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:07:22.269967  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:22.300936  522415 addons.go:239] Setting addon default-storageclass=true in "newest-cni-133340"
	I1227 10:07:22.300976  522415 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:22.301392  522415 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:22.362309  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:22.364326  522415 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:07:22.364353  522415 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:07:22.364415  522415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:22.421514  522415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:22.794020  522415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:07:22.808414  522415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:07:22.808556  522415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:22.909171  522415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:07:24.097565  522415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.30345884s)
	I1227 10:07:24.097611  522415 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.289037729s)
	I1227 10:07:24.097623  522415 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:07:24.098691  522415 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.290039637s)
	I1227 10:07:24.099300  522415 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:07:24.099347  522415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:07:24.099432  522415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.190239804s)
	I1227 10:07:24.134801  522415 api_server.go:72] duration metric: took 1.913940858s to wait for apiserver process to appear ...
	I1227 10:07:24.134826  522415 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:07:24.134846  522415 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:24.165570  522415 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:07:24.170325  522415 api_server.go:141] control plane version: v1.35.0
	I1227 10:07:24.170356  522415 api_server.go:131] duration metric: took 35.524053ms to wait for apiserver health ...
	I1227 10:07:24.170365  522415 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:07:24.186451  522415 system_pods.go:59] 9 kube-system pods found
	I1227 10:07:24.186498  522415 system_pods.go:61] "coredns-7d764666f9-8hltm" [53788b36-0c73-4a63-a7cf-762e014c7476] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:24.186527  522415 system_pods.go:61] "coredns-7d764666f9-ztmc7" [f239b963-7c4a-4112-8652-c5b0f615f94f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:24.186542  522415 system_pods.go:61] "etcd-newest-cni-133340" [cfbaeb70-0fb0-4c4a-9a7e-163789d297a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:24.186551  522415 system_pods.go:61] "kindnet-fgjsl" [c7827a10-1fba-4ca9-a964-97f5b7ea1ceb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:07:24.186585  522415 system_pods.go:61] "kube-apiserver-newest-cni-133340" [9c2aa856-552a-4144-af72-84fde5e9c118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:24.186599  522415 system_pods.go:61] "kube-controller-manager-newest-cni-133340" [4d2a8823-9c53-4856-8b61-0f5847c7877d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:24.186604  522415 system_pods.go:61] "kube-proxy-524xs" [21306208-0f93-4fa6-9524-38dc4245c9de] Running
	I1227 10:07:24.186617  522415 system_pods.go:61] "kube-scheduler-newest-cni-133340" [f9032e07-acb4-4316-af98-a51df2721f9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:24.186623  522415 system_pods.go:61] "storage-provisioner" [00d34553-4b22-4ac9-9a3b-c1a9cb443967] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:24.186648  522415 system_pods.go:74] duration metric: took 16.262807ms to wait for pod list to return data ...
	I1227 10:07:24.186664  522415 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:07:24.193081  522415 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:07:24.196147  522415 addons.go:530] duration metric: took 1.974901839s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:07:24.198072  522415 default_sa.go:45] found service account: "default"
	I1227 10:07:24.198100  522415 default_sa.go:55] duration metric: took 11.42863ms for default service account to be created ...
	I1227 10:07:24.198135  522415 kubeadm.go:587] duration metric: took 1.977279707s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:07:24.198180  522415 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:07:24.206534  522415 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:07:24.206568  522415 node_conditions.go:123] node cpu capacity is 2
	I1227 10:07:24.206581  522415 node_conditions.go:105] duration metric: took 8.394714ms to run NodePressure ...
	I1227 10:07:24.206617  522415 start.go:242] waiting for startup goroutines ...
	I1227 10:07:24.602509  522415 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-133340" context rescaled to 1 replicas
	I1227 10:07:24.602584  522415 start.go:247] waiting for cluster config update ...
	I1227 10:07:24.602611  522415 start.go:256] writing updated cluster config ...
	I1227 10:07:24.602906  522415 ssh_runner.go:195] Run: rm -f paused
	I1227 10:07:24.694299  522415 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:07:24.701388  522415 out.go:203] 
	W1227 10:07:24.705123  522415 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:07:24.708866  522415 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:07:24.712435  522415 out.go:179] * Done! kubectl is now configured to use "newest-cni-133340" cluster and "default" namespace by default
	I1227 10:07:20.347425  526230 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:07:20.347672  526230 start.go:159] libmachine.API.Create for "auto-246753" (driver="docker")
	I1227 10:07:20.347708  526230 client.go:173] LocalClient.Create starting
	I1227 10:07:20.347784  526230 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem
	I1227 10:07:20.347825  526230 main.go:144] libmachine: Decoding PEM data...
	I1227 10:07:20.347844  526230 main.go:144] libmachine: Parsing certificate...
	I1227 10:07:20.347894  526230 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem
	I1227 10:07:20.347915  526230 main.go:144] libmachine: Decoding PEM data...
	I1227 10:07:20.347926  526230 main.go:144] libmachine: Parsing certificate...
	I1227 10:07:20.348282  526230 cli_runner.go:164] Run: docker network inspect auto-246753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:07:20.364949  526230 cli_runner.go:211] docker network inspect auto-246753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:07:20.365032  526230 network_create.go:284] running [docker network inspect auto-246753] to gather additional debugging logs...
	I1227 10:07:20.365055  526230 cli_runner.go:164] Run: docker network inspect auto-246753
	W1227 10:07:20.381079  526230 cli_runner.go:211] docker network inspect auto-246753 returned with exit code 1
	I1227 10:07:20.381111  526230 network_create.go:287] error running [docker network inspect auto-246753]: docker network inspect auto-246753: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-246753 not found
	I1227 10:07:20.381125  526230 network_create.go:289] output of [docker network inspect auto-246753]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-246753 not found
	
	** /stderr **
	I1227 10:07:20.381232  526230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:20.408862  526230 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
	I1227 10:07:20.409244  526230 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b122a856da6d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:eb:89:95:c1:62} reservation:<nil>}
	I1227 10:07:20.409474  526230 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a1a16649dea9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:51:2b:90:d1:ea} reservation:<nil>}
	I1227 10:07:20.409778  526230 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-96182c0697df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:ed:bd:6f:51:96} reservation:<nil>}
	I1227 10:07:20.410337  526230 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6c540}
	I1227 10:07:20.410372  526230 network_create.go:124] attempt to create docker network auto-246753 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:07:20.410430  526230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-246753 auto-246753
	I1227 10:07:20.494844  526230 network_create.go:108] docker network auto-246753 192.168.85.0/24 created
	I1227 10:07:20.494879  526230 kic.go:121] calculated static IP "192.168.85.2" for the "auto-246753" container
	I1227 10:07:20.494954  526230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:07:20.513242  526230 cli_runner.go:164] Run: docker volume create auto-246753 --label name.minikube.sigs.k8s.io=auto-246753 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:07:20.536228  526230 oci.go:103] Successfully created a docker volume auto-246753
	I1227 10:07:20.536329  526230 cli_runner.go:164] Run: docker run --rm --name auto-246753-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-246753 --entrypoint /usr/bin/test -v auto-246753:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:07:22.824203  526230 cli_runner.go:217] Completed: docker run --rm --name auto-246753-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-246753 --entrypoint /usr/bin/test -v auto-246753:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (2.28781064s)
	I1227 10:07:22.824231  526230 oci.go:107] Successfully prepared a docker volume auto-246753
	I1227 10:07:22.824276  526230 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:22.824295  526230 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:07:22.824381  526230 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-246753:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Dec 27 10:07:09 newest-cni-133340 crio[843]: time="2025-12-27T10:07:09.820015458Z" level=info msg="Created container 297892f84f812685c673257f7fbc8f87967b55fc55ce966d911af249425c3eed: kube-system/kube-controller-manager-newest-cni-133340/kube-controller-manager" id=adc5d09a-272c-4721-8c49-cea259a02197 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:09 newest-cni-133340 crio[843]: time="2025-12-27T10:07:09.822726089Z" level=info msg="Starting container: 297892f84f812685c673257f7fbc8f87967b55fc55ce966d911af249425c3eed" id=85471a67-5a58-47b6-b66f-762e74208ad5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:09 newest-cni-133340 crio[843]: time="2025-12-27T10:07:09.826199354Z" level=info msg="Started container" PID=1260 containerID=297892f84f812685c673257f7fbc8f87967b55fc55ce966d911af249425c3eed description=kube-system/kube-controller-manager-newest-cni-133340/kube-controller-manager id=85471a67-5a58-47b6-b66f-762e74208ad5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0341d23ce6f7720da67c073c1991d89ad42c8e2f276f95421c9b5e610cdf32a6
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.101941333Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-524xs/POD" id=6f79da3c-55f2-4694-ae77-c894298debb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.102036834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.112171229Z" level=info msg="Running pod sandbox: kube-system/kindnet-fgjsl/POD" id=ec01412d-b4fa-4ae1-beeb-d442c9c77f8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.112284978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.125127895Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ec01412d-b4fa-4ae1-beeb-d442c9c77f8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.12942568Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6f79da3c-55f2-4694-ae77-c894298debb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.135493882Z" level=info msg="Ran pod sandbox 7fb0f2e692189c5e2a6c9daa9c9635ddcef6ebc6a51d57fd6e1d08f863933787 with infra container: kube-system/kindnet-fgjsl/POD" id=ec01412d-b4fa-4ae1-beeb-d442c9c77f8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.141603399Z" level=info msg="Ran pod sandbox 981a54091f304a57a75487337955667e6731d363ad1ce056210021943fc89fd6 with infra container: kube-system/kube-proxy-524xs/POD" id=6f79da3c-55f2-4694-ae77-c894298debb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.155136564Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=48d58402-1e78-4a3a-9038-f0314ccfb93a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.155559683Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=11b3bef4-a1ce-49fe-89f5-ab847ad3cb0e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.156436338Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=48d58402-1e78-4a3a-9038-f0314ccfb93a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.156601649Z" level=info msg="Neither image nor artifact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=48d58402-1e78-4a3a-9038-f0314ccfb93a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.161053093Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=e4e965b6-7d30-4ea7-bfe5-05d3b8ddbace name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.165742144Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=8b9c5fb7-a91b-4f86-8479-b944aff35451 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.174595334Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.179040468Z" level=info msg="Creating container: kube-system/kube-proxy-524xs/kube-proxy" id=44727f32-751c-4717-81db-3468413fc0f4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.179299368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.18960846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.196920559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.228981217Z" level=info msg="Created container 8a21cf11b87c1900e0eec949db3beaa8d0fe682aa94a0d653192996907fb773e: kube-system/kube-proxy-524xs/kube-proxy" id=44727f32-751c-4717-81db-3468413fc0f4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.232660902Z" level=info msg="Starting container: 8a21cf11b87c1900e0eec949db3beaa8d0fe682aa94a0d653192996907fb773e" id=9c3317bf-e701-4968-baa6-3adb477b7c5e name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:23 newest-cni-133340 crio[843]: time="2025-12-27T10:07:23.270789033Z" level=info msg="Started container" PID=1485 containerID=8a21cf11b87c1900e0eec949db3beaa8d0fe682aa94a0d653192996907fb773e description=kube-system/kube-proxy-524xs/kube-proxy id=9c3317bf-e701-4968-baa6-3adb477b7c5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=981a54091f304a57a75487337955667e6731d363ad1ce056210021943fc89fd6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8a21cf11b87c1       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   3 seconds ago       Running             kube-proxy                0                   981a54091f304       kube-proxy-524xs                            kube-system
	f77753ad8f1ec       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   16 seconds ago      Running             kube-apiserver            0                   9fc4d9fd9e6de       kube-apiserver-newest-cni-133340            kube-system
	297892f84f812       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   16 seconds ago      Running             kube-controller-manager   0                   0341d23ce6f77       kube-controller-manager-newest-cni-133340   kube-system
	316f0d316eb7d       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   17 seconds ago      Running             kube-scheduler            0                   fde5cb9ca4b90       kube-scheduler-newest-cni-133340            kube-system
	affa0dfd5e3a4       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   17 seconds ago      Running             etcd                      0                   4a718bab5fd11       etcd-newest-cni-133340                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-133340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-133340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=newest-cni-133340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_07_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:07:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-133340
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:07:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:07:17 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:07:17 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:07:17 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 10:07:17 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-133340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                2ffc56ef-4a0e-4350-837c-13fb816f4d7e
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-133340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-fgjsl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-133340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-controller-manager-newest-cni-133340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-524xs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-133340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-133340 event: Registered Node newest-cni-133340 in Controller
	
	
	==> dmesg <==
	[ +35.855481] overlayfs: idmapped layers are currently not supported
	[Dec27 09:36] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [affa0dfd5e3a484973ce56892f4200934e670bd6451a0a176a60a35d0e58428d] <==
	{"level":"info","ts":"2025-12-27T10:07:10.387311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:07:10.387459Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:07:10.387547Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T10:07:10.387618Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:10.387685Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:10.389201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:10.389285Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:10.389328Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:10.389363Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:10.394471Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:10.396502Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-133340 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:07:10.400720Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:10.401044Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:10.401136Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:10.400760Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:10.401325Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:07:10.402260Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:07:10.400771Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:10.403216Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:10.420493Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:10.420537Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:10.421861Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:07:10.426536Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:10.430189Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:07:22.051596Z","caller":"traceutil/trace.go:172","msg":"trace[1804160775] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"121.440487ms","start":"2025-12-27T10:07:21.930139Z","end":"2025-12-27T10:07:22.051580Z","steps":["trace[1804160775] 'process raft request'  (duration: 95.804517ms)","trace[1804160775] 'compare'  (duration: 25.337513ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:07:27 up  2:49,  0 user,  load average: 4.18, 2.97, 2.43
	Linux newest-cni-133340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [f77753ad8f1ec9013d851fe05720423ec0caea62ad6933a529223199289494d6] <==
	I1227 10:07:14.420946       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:07:14.421069       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:07:14.421123       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:07:14.421308       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:07:14.430922       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:07:14.472757       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:14.476879       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:07:14.512709       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:14.792865       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:07:14.800979       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:07:14.801007       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:07:15.904466       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:07:15.964258       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:07:16.099009       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:07:16.111792       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 10:07:16.113060       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:07:16.120226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:07:16.187450       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:07:17.275390       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:07:17.317297       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:07:17.338671       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:07:22.109846       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:07:22.132092       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:22.172013       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:22.217654       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [297892f84f812685c673257f7fbc8f87967b55fc55ce966d911af249425c3eed] <==
	I1227 10:07:21.009562       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:07:21.009625       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.009864       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.011480       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.011576       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.011637       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.011661       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.011764       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.011972       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.012171       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.012476       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.012706       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.013122       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.013456       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.014555       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.014749       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.014909       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.017125       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:20.999637       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.030379       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.038067       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-133340" podCIDRs=["10.42.0.0/24"]
	I1227 10:07:21.103786       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:21.103815       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:07:21.103821       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:07:21.119425       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8a21cf11b87c1900e0eec949db3beaa8d0fe682aa94a0d653192996907fb773e] <==
	I1227 10:07:23.333023       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:07:23.459544       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:23.560477       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:23.560513       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:07:23.560586       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:07:23.624646       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:07:23.624702       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:07:23.638765       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:07:23.639074       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:07:23.639088       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:23.640566       1 config.go:200] "Starting service config controller"
	I1227 10:07:23.640577       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:07:23.640594       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:07:23.640615       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:07:23.640627       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:07:23.640630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:07:23.650959       1 config.go:309] "Starting node config controller"
	I1227 10:07:23.650976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:07:23.650983       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:07:23.740705       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:07:23.740737       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:07:23.740762       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [316f0d316eb7d81cc13260dd83013010d3616fcc6498fad2cb5b67c984432972] <==
	E1227 10:07:14.364636       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:07:14.383023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:07:14.383194       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:07:14.383297       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:07:14.383428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:07:14.383591       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:07:14.383667       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:07:14.390775       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:07:14.390987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:07:14.391104       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:07:14.391262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:07:14.391375       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:07:14.391530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:07:14.391659       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:07:14.394552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:07:14.394777       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:07:14.394909       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:07:14.395108       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:07:14.395259       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:07:15.190395       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:07:15.211391       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:07:15.277444       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:07:15.450849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:07:15.921111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 10:07:18.334323       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:07:18 newest-cni-133340 kubelet[1306]: E1227 10:07:18.734456    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-133340" containerName="etcd"
	Dec 27 10:07:18 newest-cni-133340 kubelet[1306]: I1227 10:07:18.745989    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-133340" podStartSLOduration=1.7459736540000002 podStartE2EDuration="1.745973654s" podCreationTimestamp="2025-12-27 10:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:07:18.745696153 +0000 UTC m=+1.570477848" watchObservedRunningTime="2025-12-27 10:07:18.745973654 +0000 UTC m=+1.570755348"
	Dec 27 10:07:18 newest-cni-133340 kubelet[1306]: I1227 10:07:18.757913    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-133340" podStartSLOduration=1.757896009 podStartE2EDuration="1.757896009s" podCreationTimestamp="2025-12-27 10:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:07:18.757759245 +0000 UTC m=+1.582540956" watchObservedRunningTime="2025-12-27 10:07:18.757896009 +0000 UTC m=+1.582677721"
	Dec 27 10:07:18 newest-cni-133340 kubelet[1306]: I1227 10:07:18.792757    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-133340" podStartSLOduration=3.792742433 podStartE2EDuration="3.792742433s" podCreationTimestamp="2025-12-27 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:07:18.776449225 +0000 UTC m=+1.601230928" watchObservedRunningTime="2025-12-27 10:07:18.792742433 +0000 UTC m=+1.617524128"
	Dec 27 10:07:18 newest-cni-133340 kubelet[1306]: I1227 10:07:18.807847    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-133340" podStartSLOduration=1.807830949 podStartE2EDuration="1.807830949s" podCreationTimestamp="2025-12-27 10:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:07:18.792691413 +0000 UTC m=+1.617473116" watchObservedRunningTime="2025-12-27 10:07:18.807830949 +0000 UTC m=+1.632612652"
	Dec 27 10:07:19 newest-cni-133340 kubelet[1306]: E1227 10:07:19.716650    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-133340" containerName="kube-apiserver"
	Dec 27 10:07:19 newest-cni-133340 kubelet[1306]: E1227 10:07:19.716956    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-133340" containerName="etcd"
	Dec 27 10:07:19 newest-cni-133340 kubelet[1306]: E1227 10:07:19.717294    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:20 newest-cni-133340 kubelet[1306]: E1227 10:07:20.718376    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:20 newest-cni-133340 kubelet[1306]: E1227 10:07:20.718424    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-133340" containerName="etcd"
	Dec 27 10:07:20 newest-cni-133340 kubelet[1306]: E1227 10:07:20.718678    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-133340" containerName="kube-apiserver"
	Dec 27 10:07:21 newest-cni-133340 kubelet[1306]: I1227 10:07:21.066584    1306 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 10:07:21 newest-cni-133340 kubelet[1306]: I1227 10:07:21.069229    1306 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 10:07:21 newest-cni-133340 kubelet[1306]: E1227 10:07:21.720012    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:21 newest-cni-133340 kubelet[1306]: E1227 10:07:21.977300    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-133340" containerName="kube-controller-manager"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: E1227 10:07:22.476388    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-133340" containerName="kube-apiserver"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667003    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmc46\" (UniqueName: \"kubernetes.io/projected/21306208-0f93-4fa6-9524-38dc4245c9de-kube-api-access-qmc46\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667073    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-cni-cfg\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667097    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-xtables-lock\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667140    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21306208-0f93-4fa6-9524-38dc4245c9de-lib-modules\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667163    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21306208-0f93-4fa6-9524-38dc4245c9de-kube-proxy\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667182    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21306208-0f93-4fa6-9524-38dc4245c9de-xtables-lock\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667224    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-lib-modules\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.667246    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2f7\" (UniqueName: \"kubernetes.io/projected/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-kube-api-access-ww2f7\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:22 newest-cni-133340 kubelet[1306]: I1227 10:07:22.810443    1306 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-133340 -n newest-cni-133340
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-133340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-ztmc7 storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner: exit status 1 (175.094087ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-ztmc7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-133340 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-133340 --alsologtostderr -v=1: exit status 80 (2.517713849s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-133340 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:07:48.075938  530952 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:48.076135  530952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:48.076162  530952 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:48.076180  530952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:48.076498  530952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:07:48.076804  530952 out.go:368] Setting JSON to false
	I1227 10:07:48.077292  530952 mustload.go:66] Loading cluster: newest-cni-133340
	I1227 10:07:48.077761  530952 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:48.080927  530952 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:48.111745  530952 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:48.112053  530952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:48.213239  530952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-27 10:07:48.199282543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:48.213878  530952 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-133340 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:07:48.217480  530952 out.go:179] * Pausing node newest-cni-133340 ... 
	I1227 10:07:48.221442  530952 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:48.221795  530952 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:48.221845  530952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:48.255784  530952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:48.406785  530952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:48.510062  530952 pause.go:52] kubelet running: true
	I1227 10:07:48.510130  530952 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:48.977540  530952 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:48.977666  530952 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:49.169249  530952 cri.go:96] found id: "3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1"
	I1227 10:07:49.169269  530952 cri.go:96] found id: "78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2"
	I1227 10:07:49.169274  530952 cri.go:96] found id: "cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f"
	I1227 10:07:49.169281  530952 cri.go:96] found id: "c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596"
	I1227 10:07:49.169284  530952 cri.go:96] found id: "39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91"
	I1227 10:07:49.169295  530952 cri.go:96] found id: "7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2"
	I1227 10:07:49.169298  530952 cri.go:96] found id: ""
	I1227 10:07:49.169355  530952 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:49.206352  530952 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:49Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:49.505679  530952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:49.528070  530952 pause.go:52] kubelet running: false
	I1227 10:07:49.528251  530952 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:49.764776  530952 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:49.764901  530952 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:49.840813  530952 cri.go:96] found id: "3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1"
	I1227 10:07:49.840877  530952 cri.go:96] found id: "78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2"
	I1227 10:07:49.840897  530952 cri.go:96] found id: "cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f"
	I1227 10:07:49.840917  530952 cri.go:96] found id: "c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596"
	I1227 10:07:49.840936  530952 cri.go:96] found id: "39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91"
	I1227 10:07:49.841007  530952 cri.go:96] found id: "7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2"
	I1227 10:07:49.841035  530952 cri.go:96] found id: ""
	I1227 10:07:49.841119  530952 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:50.175926  530952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:50.192581  530952 pause.go:52] kubelet running: false
	I1227 10:07:50.192658  530952 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:50.373805  530952 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:50.373899  530952 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:50.467875  530952 cri.go:96] found id: "3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1"
	I1227 10:07:50.467919  530952 cri.go:96] found id: "78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2"
	I1227 10:07:50.467924  530952 cri.go:96] found id: "cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f"
	I1227 10:07:50.467928  530952 cri.go:96] found id: "c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596"
	I1227 10:07:50.467931  530952 cri.go:96] found id: "39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91"
	I1227 10:07:50.467935  530952 cri.go:96] found id: "7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2"
	I1227 10:07:50.467939  530952 cri.go:96] found id: ""
	I1227 10:07:50.467994  530952 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:50.485632  530952 out.go:203] 
	W1227 10:07:50.488505  530952 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:07:50.488525  530952 out.go:285] * 
	* 
	W1227 10:07:50.492810  530952 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:07:50.495818  530952 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-133340 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-133340
helpers_test.go:244: (dbg) docker inspect newest-cni-133340:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50",
	        "Created": "2025-12-27T10:06:53.938355809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 528415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:07:30.862534319Z",
	            "FinishedAt": "2025-12-27T10:07:29.947930687Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/hostname",
	        "HostsPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/hosts",
	        "LogPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50-json.log",
	        "Name": "/newest-cni-133340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-133340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-133340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50",
	                "LowerDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-133340",
	                "Source": "/var/lib/docker/volumes/newest-cni-133340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-133340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-133340",
	                "name.minikube.sigs.k8s.io": "newest-cni-133340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0e184dd9795323d477977ed5616d550ad340ad250881cf42962f0f6fe2da274",
	            "SandboxKey": "/var/run/docker/netns/e0e184dd9795",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-133340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:56:e3:05:83:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96182c0697dfbd3eb2978fc5bdfe5ab11e5c5e202e442a3bbbd2ca0b5a3c02a5",
	                    "EndpointID": "687db72d3da4b43193b932e64f2ef812f42c3013e18c12e0a6c5603fa5aa7dcf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-133340",
	                        "83f3564ca786"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340: exit status 2 (437.017809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-133340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-133340 logs -n 25: (1.497745025s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                                                                                                   │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ image   │ default-k8s-diff-port-681744 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p default-k8s-diff-port-681744 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-681744                                                                                                                                                                                                               │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ delete  │ -p default-k8s-diff-port-681744                                                                                                                                                                                                               │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-gcs-425359 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-425359        │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-425359                                                                                                                                                                                                                 │ test-preload-dl-gcs-425359        │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-github-343343 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-343343     │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-github-343343                                                                                                                                                                                                              │ test-preload-dl-github-343343     │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-955830 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-955830 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-955830                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-955830 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p auto-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-246753                       │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-133340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ stop    │ -p newest-cni-133340 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-133340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ image   │ newest-cni-133340 image list --format=json                                                                                                                                                                                                    │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ pause   │ -p newest-cni-133340 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:07:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:07:30.588310  528292 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:30.588451  528292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:30.588462  528292 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:30.588468  528292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:30.588731  528292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:07:30.589114  528292 out.go:368] Setting JSON to false
	I1227 10:07:30.590060  528292 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10200,"bootTime":1766819851,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:07:30.590141  528292 start.go:143] virtualization:  
	I1227 10:07:30.593187  528292 out.go:179] * [newest-cni-133340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:07:30.596934  528292 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:07:30.597069  528292 notify.go:221] Checking for updates...
	I1227 10:07:30.602859  528292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:07:30.605775  528292 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:30.608676  528292 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:07:30.611647  528292 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:07:30.614518  528292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:07:30.617848  528292 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:30.618498  528292 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:07:30.642551  528292 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:07:30.642665  528292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:30.701362  528292 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:30.691895894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:30.701473  528292 docker.go:319] overlay module found
	I1227 10:07:30.704824  528292 out.go:179] * Using the docker driver based on existing profile
	I1227 10:07:30.707659  528292 start.go:309] selected driver: docker
	I1227 10:07:30.707677  528292 start.go:928] validating driver "docker" against &{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:30.707792  528292 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:07:30.708481  528292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:30.765688  528292 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:30.756260109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:30.766053  528292 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:07:30.766081  528292 cni.go:84] Creating CNI manager for ""
	I1227 10:07:30.766142  528292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:30.766273  528292 start.go:353] cluster config:
	{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:30.771716  528292 out.go:179] * Starting "newest-cni-133340" primary control-plane node in "newest-cni-133340" cluster
	I1227 10:07:30.774841  528292 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:07:30.777842  528292 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:07:30.780585  528292 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:30.780635  528292 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:07:30.780645  528292 cache.go:65] Caching tarball of preloaded images
	I1227 10:07:30.780681  528292 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:07:30.780737  528292 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:07:30.780748  528292 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:07:30.780882  528292 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:07:30.800792  528292 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:07:30.800812  528292 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:07:30.800828  528292 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:07:30.800864  528292 start.go:360] acquireMachinesLock for newest-cni-133340: {Name:mke43a3ebd8f4eaf65da86bf9dafee410f8229a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:07:30.800921  528292 start.go:364] duration metric: took 39.811µs to acquireMachinesLock for "newest-cni-133340"
	I1227 10:07:30.800941  528292 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:07:30.800946  528292 fix.go:54] fixHost starting: 
	I1227 10:07:30.801203  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:30.828045  528292 fix.go:112] recreateIfNeeded on newest-cni-133340: state=Stopped err=<nil>
	W1227 10:07:30.828075  528292 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:07:32.225693  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-246753
	
	I1227 10:07:32.225729  526230 ubuntu.go:182] provisioning hostname "auto-246753"
	I1227 10:07:32.225792  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:32.243587  526230 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:32.243911  526230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1227 10:07:32.243926  526230 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-246753 && echo "auto-246753" | sudo tee /etc/hostname
	I1227 10:07:32.396219  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-246753
	
	I1227 10:07:32.396352  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:32.414974  526230 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:32.415323  526230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1227 10:07:32.415344  526230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-246753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-246753/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-246753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:07:32.558810  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:07:32.558837  526230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:07:32.558856  526230 ubuntu.go:190] setting up certificates
	I1227 10:07:32.558867  526230 provision.go:84] configureAuth start
	I1227 10:07:32.558926  526230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-246753
	I1227 10:07:32.576995  526230 provision.go:143] copyHostCerts
	I1227 10:07:32.577070  526230 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:07:32.577086  526230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:07:32.577160  526230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:07:32.577270  526230 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:07:32.577281  526230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:07:32.577309  526230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:07:32.577375  526230 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:07:32.577385  526230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:07:32.577410  526230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:07:32.577472  526230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.auto-246753 san=[127.0.0.1 192.168.85.2 auto-246753 localhost minikube]
	I1227 10:07:33.022596  526230 provision.go:177] copyRemoteCerts
	I1227 10:07:33.022672  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:07:33.022714  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.040662  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.138357  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:07:33.157291  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 10:07:33.175618  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:07:33.193610  526230 provision.go:87] duration metric: took 634.719087ms to configureAuth
	I1227 10:07:33.193637  526230 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:07:33.193826  526230 config.go:182] Loaded profile config "auto-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:33.193925  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.210976  526230 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:33.211330  526230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1227 10:07:33.211362  526230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:07:33.506947  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:33.506970  526230 machine.go:97] duration metric: took 4.460655316s to provisionDockerMachine
	I1227 10:07:33.506981  526230 client.go:176] duration metric: took 13.159262918s to LocalClient.Create
	I1227 10:07:33.506994  526230 start.go:167] duration metric: took 13.159323767s to libmachine.API.Create "auto-246753"
	I1227 10:07:33.507002  526230 start.go:293] postStartSetup for "auto-246753" (driver="docker")
	I1227 10:07:33.507011  526230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:33.507092  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:33.507142  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.525356  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.626961  526230 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:33.630514  526230 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:33.630545  526230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:07:33.630558  526230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:07:33.630618  526230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:07:33.630703  526230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:07:33.630811  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:33.638782  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:33.657343  526230 start.go:296] duration metric: took 150.327928ms for postStartSetup
	I1227 10:07:33.657733  526230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-246753
	I1227 10:07:33.675118  526230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/config.json ...
	I1227 10:07:33.675413  526230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:33.675462  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.692232  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.787228  526230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:33.791936  526230 start.go:128] duration metric: took 13.447727779s to createHost
	I1227 10:07:33.791963  526230 start.go:83] releasing machines lock for "auto-246753", held for 13.44786479s
	I1227 10:07:33.792035  526230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-246753
	I1227 10:07:33.809848  526230 ssh_runner.go:195] Run: cat /version.json
	I1227 10:07:33.809870  526230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:07:33.809899  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.809942  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.830236  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.830090  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.926385  526230 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:34.026078  526230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:07:34.063370  526230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:07:34.067822  526230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:07:34.067912  526230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:07:34.096687  526230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
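
The find/mv step above side-lines any bridge or podman CNI configs so they cannot conflict with the kindnet CNI that minikube installs later. A minimal Go sketch of the same idea (a hypothetical helper, not minikube's own code; the directory and the ".mk_disabled" suffix are taken from the logged command):

// disable_bridge_cni.go - sketch that renames bridge/podman CNI configs with a
// ".mk_disabled" suffix, mirroring the `find ... -exec mv {} {}.mk_disabled`
// command shown in the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func disableBridgeCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
}
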
	I1227 10:07:34.096714  526230 start.go:496] detecting cgroup driver to use...
	I1227 10:07:34.096788  526230 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:07:34.096871  526230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:07:34.115384  526230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:07:34.128607  526230 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:07:34.128711  526230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:34.148936  526230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:34.168401  526230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:34.328494  526230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:34.495885  526230 docker.go:234] disabling docker service ...
	I1227 10:07:34.495953  526230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:34.519745  526230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:34.541927  526230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:34.688479  526230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:34.842457  526230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:34.857547  526230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:34.872507  526230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:34.872589  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.882142  526230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:34.882297  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.891557  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.900324  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.909299  526230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:34.917643  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.926847  526230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.940904  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.950339  526230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:34.958741  526230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:34.966803  526230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:35.113152  526230 ssh_runner.go:195] Run: sudo systemctl restart crio
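
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch cgroup_manager to cgroupfs and re-add conmon_cgroup = "pod", before systemd is reloaded and CRI-O restarted. A rough Go equivalent of those three edits (illustrative only; the default_sysctls and crictl.yaml steps are left to the logged commands):

// crio_conf_patch.go - illustrative sketch (not minikube source) of the edits
// the logged sed commands make to /etc/crio/crio.conf.d/02-crio.conf.
package main

import (
	"os"
	"regexp"
)

func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)

	// pause_image = "registry.k8s.io/pause:3.10.1"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		os.Exit(1)
	}
	// A `systemctl daemon-reload && systemctl restart crio` is still needed
	// afterwards, as the log shows.
}
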
	I1227 10:07:35.335467  526230 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:35.335540  526230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:35.341174  526230 start.go:574] Will wait 60s for crictl version
	I1227 10:07:35.341254  526230 ssh_runner.go:195] Run: which crictl
	I1227 10:07:35.344987  526230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:35.372618  526230 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:35.372706  526230 ssh_runner.go:195] Run: crio --version
	I1227 10:07:35.405240  526230 ssh_runner.go:195] Run: crio --version
	I1227 10:07:35.450049  526230 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:30.831001  528292 out.go:252] * Restarting existing docker container for "newest-cni-133340" ...
	I1227 10:07:30.831109  528292 cli_runner.go:164] Run: docker start newest-cni-133340
	I1227 10:07:31.096088  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:31.117424  528292 kic.go:430] container "newest-cni-133340" state is running.
	I1227 10:07:31.117836  528292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:07:31.141782  528292 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:07:31.142384  528292 machine.go:94] provisionDockerMachine start ...
	I1227 10:07:31.142540  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:31.173346  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:31.173665  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:31.173674  528292 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:07:31.174456  528292 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:07:34.325928  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:07:34.325955  528292 ubuntu.go:182] provisioning hostname "newest-cni-133340"
	I1227 10:07:34.326052  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:34.347520  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:34.347853  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:34.347868  528292 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-133340 && echo "newest-cni-133340" | sudo tee /etc/hostname
	I1227 10:07:34.519970  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:07:34.520036  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:34.543713  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:34.544024  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:34.544046  528292 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133340/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:07:34.702495  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: 
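
The shell snippet above makes /etc/hosts resolve the machine's hostname: if no entry already ends in the hostname, it either rewrites the existing 127.0.1.1 line or appends a new one. A small Go sketch of that logic (hypothetical helper, assumed to run as root on the node; the hostname is the one from this run):

// etc_hosts_hostname.go - sketch of the /etc/hosts fix-up shown above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	content := string(data)

	// already mapped? (mirrors: grep -xq '.*\s<hostname>' /etc/hosts)
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
		return nil
	}
	// rewrite an existing 127.0.1.1 line, otherwise append one
	loopbackLine := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopbackLine.MatchString(content) {
		content = loopbackLine.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		content += fmt.Sprintf("127.0.1.1 %s\n", hostname)
	}
	return os.WriteFile(hostsPath, []byte(content), 0o644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "newest-cni-133340"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
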
	I1227 10:07:34.702521  528292 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:07:34.702551  528292 ubuntu.go:190] setting up certificates
	I1227 10:07:34.702561  528292 provision.go:84] configureAuth start
	I1227 10:07:34.702629  528292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:07:34.720673  528292 provision.go:143] copyHostCerts
	I1227 10:07:34.720739  528292 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:07:34.720756  528292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:07:34.720817  528292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:07:34.720922  528292 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:07:34.720927  528292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:07:34.720947  528292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:07:34.721003  528292 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:07:34.721008  528292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:07:34.721032  528292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:07:34.721089  528292 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133340 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-133340]
	I1227 10:07:34.992235  528292 provision.go:177] copyRemoteCerts
	I1227 10:07:34.992349  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:07:34.992407  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.024152  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.141958  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:07:35.166084  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:07:35.188385  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:07:35.212486  528292 provision.go:87] duration metric: took 509.900129ms to configureAuth
	I1227 10:07:35.212566  528292 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:07:35.212828  528292 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:35.212991  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.233464  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:35.233765  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:35.233779  528292 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:07:35.600708  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:35.600729  528292 machine.go:97] duration metric: took 4.458277637s to provisionDockerMachine
	I1227 10:07:35.600740  528292 start.go:293] postStartSetup for "newest-cni-133340" (driver="docker")
	I1227 10:07:35.600751  528292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:35.600840  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:35.600885  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.631940  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.740128  528292 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:35.744262  528292 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:35.744288  528292 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:07:35.744300  528292 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:07:35.744356  528292 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:07:35.744438  528292 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:07:35.744542  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:35.758865  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:35.787251  528292 start.go:296] duration metric: took 186.495985ms for postStartSetup
	I1227 10:07:35.787329  528292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:35.787382  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.813827  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.916352  528292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:35.922125  528292 fix.go:56] duration metric: took 5.121172304s for fixHost
	I1227 10:07:35.922263  528292 start.go:83] releasing machines lock for "newest-cni-133340", held for 5.121332256s
	I1227 10:07:35.922358  528292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:07:35.945272  528292 ssh_runner.go:195] Run: cat /version.json
	I1227 10:07:35.945321  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.945564  528292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:07:35.945620  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.995121  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.995928  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:36.106061  528292 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:36.211544  528292 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:07:36.251766  528292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:07:36.256338  528292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:07:36.256425  528292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:07:36.268221  528292 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:07:36.268242  528292 start.go:496] detecting cgroup driver to use...
	I1227 10:07:36.268275  528292 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:07:36.268325  528292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:07:36.285307  528292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:07:36.300036  528292 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:07:36.300105  528292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:36.316999  528292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:36.331385  528292 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:36.474969  528292 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:36.618805  528292 docker.go:234] disabling docker service ...
	I1227 10:07:36.618867  528292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:36.637752  528292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:36.652905  528292 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:36.818556  528292 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:36.981407  528292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:36.996765  528292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:37.014607  528292 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:37.014679  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.026767  528292 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:37.026832  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.037278  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.047562  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.057317  528292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:37.066674  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.076081  528292 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.084865  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.094751  528292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:37.103832  528292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:37.112546  528292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:37.261632  528292 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:07:37.448233  528292 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:37.448300  528292 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:37.454226  528292 start.go:574] Will wait 60s for crictl version
	I1227 10:07:37.454288  528292 ssh_runner.go:195] Run: which crictl
	I1227 10:07:37.458310  528292 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:37.507285  528292 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:37.507372  528292 ssh_runner.go:195] Run: crio --version
	I1227 10:07:37.540745  528292 ssh_runner.go:195] Run: crio --version
	I1227 10:07:37.586631  528292 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:35.452952  526230 cli_runner.go:164] Run: docker network inspect auto-246753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:35.476776  526230 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:35.481366  526230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:35.492005  526230 kubeadm.go:884] updating cluster {Name:auto-246753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:35.492151  526230 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:35.492213  526230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:35.533088  526230 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:35.533114  526230 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:35.533172  526230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:35.560910  526230 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:35.560936  526230 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:35.560944  526230 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:35.561071  526230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-246753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:07:35.561203  526230 ssh_runner.go:195] Run: crio config
	I1227 10:07:35.665825  526230 cni.go:84] Creating CNI manager for ""
	I1227 10:07:35.665894  526230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:35.665925  526230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:07:35.665966  526230 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-246753 NodeName:auto-246753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:35.666129  526230 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-246753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:07:35.666259  526230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:35.675340  526230 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:35.675462  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:35.684462  526230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1227 10:07:35.698746  526230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:35.717207  526230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
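
The 2228-byte kubeadm.yaml.new written above is the multi-document config dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). A short sketch that walks those documents and prints each apiVersion/kind, assuming the gopkg.in/yaml.v3 package is available:

// kubeadm_yaml_kinds.go - illustrative sketch that lists the documents in the
// generated kubeadm config; not part of minikube itself.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var tm typeMeta
		if err := dec.Decode(&tm); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
	}
}
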
	I1227 10:07:35.730179  526230 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:35.735783  526230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:35.748180  526230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:35.884511  526230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:35.900419  526230 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753 for IP: 192.168.85.2
	I1227 10:07:35.900438  526230 certs.go:195] generating shared ca certs ...
	I1227 10:07:35.900454  526230 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:35.900592  526230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:07:35.900668  526230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:07:35.900675  526230 certs.go:257] generating profile certs ...
	I1227 10:07:35.900737  526230 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.key
	I1227 10:07:35.900752  526230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.crt with IP's: []
	I1227 10:07:36.141895  526230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.crt ...
	I1227 10:07:36.141930  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.crt: {Name:mkf0ce9b15cb1d547dcb69259189b5bd4371836c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.142124  526230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.key ...
	I1227 10:07:36.142139  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.key: {Name:mk3b606387df1d40c0813baaf1d7802470b1d10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.142277  526230 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a
	I1227 10:07:36.142298  526230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:07:36.506335  526230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a ...
	I1227 10:07:36.506368  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a: {Name:mk95e18bee89c196840e37dc9a03521c66824287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.506555  526230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a ...
	I1227 10:07:36.506569  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a: {Name:mkf71db83df1cc85db92268d54c0edff2cbfac8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.506664  526230 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt
	I1227 10:07:36.506748  526230 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key
	I1227 10:07:36.506809  526230 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key
	I1227 10:07:36.506825  526230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt with IP's: []
	I1227 10:07:36.621658  526230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt ...
	I1227 10:07:36.621689  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt: {Name:mk3534a59dcc1f87c2493eb49bdfcb0cf5d09a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.621854  526230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key ...
	I1227 10:07:36.621868  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key: {Name:mk35b6a3adeb9ea216b0ad02a7f5e7e29c6e4a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
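
The certs.go lines above generate profile certificates signed by the cached minikubeCA, with the IP SANs listed for the apiserver cert. A self-contained crypto/x509 sketch of issuing such a cert (illustrative only, not minikube's crypto.go; the CA here is generated in memory rather than loaded from disk, and error handling is elided for brevity):

// signed_cert_sans.go - minimal sketch of issuing a cert with IP SANs signed
// by a CA, as the "generating signed profile cert ... with IP's" lines describe.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stands in for the cached minikubeCA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the log for the apiserver profile cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
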
	I1227 10:07:36.622049  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:07:36.622100  526230 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:36.622114  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:36.622143  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:36.622190  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:36.622218  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:36.622272  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:36.622834  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:36.649204  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:07:36.679146  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:36.710358  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:36.769368  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1227 10:07:36.791256  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:07:36.813439  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:36.832939  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:07:36.851320  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:36.877085  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:07:36.904809  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:07:36.929748  526230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:36.944877  526230 ssh_runner.go:195] Run: openssl version
	I1227 10:07:36.951832  526230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:07:36.960000  526230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:07:36.968344  526230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:07:36.972613  526230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:07:36.972750  526230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:07:37.015782  526230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:37.025966  526230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:37.035375  526230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.044469  526230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:37.053266  526230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.057741  526230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.057855  526230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.100638  526230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:07:37.109038  526230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:07:37.117603  526230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.125941  526230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:07:37.135015  526230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.139532  526230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.139595  526230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.195138  526230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:07:37.203686  526230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
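
The openssl/ln pairs above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming, so the hash symlink (for example 51391683.0) points back at the PEM file the system trust store should pick up. A sketch that shells out to openssl for the hash and creates the link (assumes the openssl binary is on PATH and write access to /etc/ssl/certs; not minikube's own helper):

// ca_hash_link.go - sketch of the subject-hash symlinks created above:
// /etc/ssl/certs/<hash>.0 -> /usr/share/ca-certificates/<name>.pem.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) (string, error) {
	// Same command the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")

	// ln -fs: remove any existing link, then create a fresh symlink.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}
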
	I1227 10:07:37.211395  526230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:37.215890  526230 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:07:37.215950  526230 kubeadm.go:401] StartCluster: {Name:auto-246753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:37.216028  526230 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:37.216087  526230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:37.243392  526230 cri.go:96] found id: ""
	I1227 10:07:37.243500  526230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:37.253003  526230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:07:37.260830  526230 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:07:37.260892  526230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:07:37.271475  526230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:07:37.271550  526230 kubeadm.go:158] found existing configuration files:
	
	I1227 10:07:37.271635  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:07:37.281814  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:07:37.281890  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:07:37.293268  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:07:37.302255  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:07:37.302316  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:07:37.310723  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:07:37.319206  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:07:37.319282  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:07:37.326832  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:07:37.335959  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:07:37.336022  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
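
The grep/rm sequence above drops any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443, so kubeadm init starts from a clean slate. A compact Go sketch of that check-and-remove loop (hypothetical helper, not minikube's kubeadm.go):

// stale_kubeconfig_cleanup.go - sketch of the stale-config check shown above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

	for _, name := range files {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil {
			// Missing file is the normal first-start case; nothing to clean up.
			continue
		}
		if strings.Contains(string(data), endpoint) {
			continue // config already points at the expected control plane
		}
		if err := os.Remove(path); err != nil {
			fmt.Fprintf(os.Stderr, "removing stale %s: %v\n", path, err)
		}
	}
}
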
	I1227 10:07:37.343075  526230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:07:37.405015  526230 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:07:37.405196  526230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:07:37.495683  526230 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:07:37.495758  526230 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:07:37.495798  526230 kubeadm.go:319] OS: Linux
	I1227 10:07:37.495861  526230 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:07:37.495914  526230 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:07:37.495964  526230 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:07:37.496015  526230 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:07:37.496067  526230 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:07:37.496118  526230 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:07:37.496167  526230 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:07:37.496218  526230 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:07:37.496273  526230 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:07:37.585634  526230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:07:37.585774  526230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:07:37.585925  526230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:07:37.614646  526230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:07:37.589772  528292 cli_runner.go:164] Run: docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:37.618522  528292 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:37.623370  528292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:37.636096  528292 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 10:07:37.638936  528292 kubeadm.go:884] updating cluster {Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:37.639100  528292 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:37.639165  528292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:37.674972  528292 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:37.674994  528292 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:37.675048  528292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:37.719683  528292 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:37.719757  528292 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:37.719780  528292 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:37.719943  528292 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-133340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:07:37.720065  528292 ssh_runner.go:195] Run: crio config
	I1227 10:07:37.802253  528292 cni.go:84] Creating CNI manager for ""
	I1227 10:07:37.802321  528292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:37.802354  528292 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 10:07:37.802409  528292 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-133340 NodeName:newest-cni-133340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:37.802588  528292 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-133340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
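	
	Editor's note: the generated config above wires the same 10.42.0.0/16 pod CIDR into networking.podSubnet and KubeProxyConfiguration.clusterCIDR, while services stay on 10.96.0.0/12; the two ranges must not overlap. A minimal, hedged Go sanity check of that invariant (an illustration, not something minikube itself runs):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses.  For aligned
// prefixes this holds exactly when one network contains the other's base address.
func overlaps(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	// Values taken from the kubeadm config above.
	ok, err := overlaps("10.42.0.0/16", "10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod and service CIDRs overlap:", ok) // expected: false
}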
	
	I1227 10:07:37.802703  528292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:37.811204  528292 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:37.811320  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:37.819279  528292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:07:37.838327  528292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:37.859717  528292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1227 10:07:37.885407  528292 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:37.891392  528292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:37.905565  528292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:38.065820  528292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:38.087953  528292 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340 for IP: 192.168.76.2
	I1227 10:07:38.087977  528292 certs.go:195] generating shared ca certs ...
	I1227 10:07:38.087994  528292 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:38.088180  528292 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:07:38.088271  528292 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:07:38.088287  528292 certs.go:257] generating profile certs ...
	I1227 10:07:38.088416  528292 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.key
	I1227 10:07:38.088525  528292 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key.5a59841a
	I1227 10:07:38.088586  528292 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key
	I1227 10:07:38.088742  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:07:38.088796  528292 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:38.088814  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:38.088844  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:38.088888  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:38.088932  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:38.088997  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:38.089805  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:38.116578  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:07:38.144137  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:38.190121  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:38.261666  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:07:38.334591  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:07:38.366097  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:38.405330  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:07:38.424715  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:07:38.449471  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:38.476555  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:07:38.501385  528292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:38.519994  528292 ssh_runner.go:195] Run: openssl version
	I1227 10:07:38.527473  528292 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.537585  528292 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:07:38.546095  528292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.550055  528292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.550191  528292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.592498  528292 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:38.601578  528292 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.609580  528292 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:38.617934  528292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.622405  528292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.622531  528292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.664331  528292 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:07:38.672585  528292 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.680551  528292 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:07:38.688566  528292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.692605  528292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.692720  528292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.736008  528292 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
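Editor's note: each CA above is symlinked into the system trust store, then `openssl x509 -hash -noout` is used to derive the subject-hash name (e.g. b5213941.0) that `test -L` verifies. A hedged Go sketch of the hash-and-link convention, using hypothetical local paths and no sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates a <subject-hash>.0 symlink to certPath inside certsDir,
// a simplified version of the openssl + ln -fs steps in the log above.
func linkCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate the force flag of ln -fs
	return link, os.Symlink(certPath, link)
}

func main() {
	// Hypothetical paths; the real run links under /etc/ssl/certs with sudo.
	link, err := linkCACert("minikubeCA.pem", "certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}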
	I1227 10:07:38.744397  528292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:38.748535  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:07:38.790685  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:07:38.890669  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:07:38.994352  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:07:39.061759  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:07:39.212463  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
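Editor's note: the `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours. A hedged Go equivalent using crypto/x509, against a hypothetical local PEM file rather than the paths under /var/lib/minikube/certs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires within d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// "apiserver.crt" is a placeholder path for the sketch.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}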
	I1227 10:07:39.347487  528292 kubeadm.go:401] StartCluster: {Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:39.347630  528292 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:39.347724  528292 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:39.463794  528292 cri.go:96] found id: "cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f"
	I1227 10:07:39.463871  528292 cri.go:96] found id: "c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596"
	I1227 10:07:39.463901  528292 cri.go:96] found id: "39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91"
	I1227 10:07:39.463919  528292 cri.go:96] found id: "7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2"
	I1227 10:07:39.463952  528292 cri.go:96] found id: ""
	I1227 10:07:39.464037  528292 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:07:39.511844  528292 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:39Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:39.511965  528292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:39.546819  528292 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:07:39.546889  528292 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:07:39.546975  528292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:07:39.563851  528292 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:07:39.564354  528292 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-133340" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:39.564502  528292 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-133340" cluster setting kubeconfig missing "newest-cni-133340" context setting]
	I1227 10:07:39.564857  528292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:39.566431  528292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:07:39.591889  528292 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:07:39.591965  528292 kubeadm.go:602] duration metric: took 45.038493ms to restartPrimaryControlPlane
	I1227 10:07:39.591994  528292 kubeadm.go:403] duration metric: took 244.515464ms to StartCluster
	I1227 10:07:39.592043  528292 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:39.592132  528292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:39.592854  528292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:39.593118  528292 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:07:39.593566  528292 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:39.593566  528292 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:07:39.593641  528292 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-133340"
	I1227 10:07:39.593663  528292 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-133340"
	W1227 10:07:39.593675  528292 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:07:39.593698  528292 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:39.593731  528292 addons.go:70] Setting dashboard=true in profile "newest-cni-133340"
	I1227 10:07:39.593756  528292 addons.go:239] Setting addon dashboard=true in "newest-cni-133340"
	W1227 10:07:39.593786  528292 addons.go:248] addon dashboard should already be in state true
	I1227 10:07:39.593820  528292 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:39.594284  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.594819  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.594885  528292 addons.go:70] Setting default-storageclass=true in profile "newest-cni-133340"
	I1227 10:07:39.594915  528292 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-133340"
	I1227 10:07:39.595647  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
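Editor's note: the repeated `docker container inspect ... --format={{.State.Status}}` calls above poll the state of the node container before each addon step. A hedged Go sketch of that probe via os/exec; the container name is taken from the log and the result depends on the local Docker daemon:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus returns Docker's state string ("running", "exited", ...)
// for a named container, the same check the cli_runner lines above perform.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("newest-cni-133340") // profile name from the log
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container status:", status)
}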
	I1227 10:07:39.602219  528292 out.go:179] * Verifying Kubernetes components...
	I1227 10:07:39.610273  528292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:39.650957  528292 addons.go:239] Setting addon default-storageclass=true in "newest-cni-133340"
	W1227 10:07:39.650980  528292 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:07:39.651006  528292 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:39.651970  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.674748  528292 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:07:39.679735  528292 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:07:39.679819  528292 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:07:37.620144  526230 out.go:252]   - Generating certificates and keys ...
	I1227 10:07:37.620284  526230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:07:37.620382  526230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:07:37.800226  526230 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:07:38.363721  526230 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:07:38.847221  526230 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:07:39.382240  526230 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:07:39.587529  526230 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:07:39.588295  526230 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-246753 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:07:39.841366  526230 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:07:39.848989  526230 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-246753 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:07:39.687466  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:07:39.687498  528292 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:07:39.687590  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:39.687838  528292 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:07:39.687852  528292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:07:39.687891  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:39.690585  528292 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:07:39.690604  528292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:07:39.690657  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:39.738395  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:39.750890  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:39.760026  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:40.058753  528292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:07:40.111871  528292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:40.177095  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:07:40.177160  528292 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:07:40.264091  528292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:07:40.266645  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:07:40.266723  528292 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:07:40.316117  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:07:40.316194  528292 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:07:40.389941  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:07:40.390027  528292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:07:40.482599  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:07:40.482621  528292 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:07:40.536161  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:07:40.536187  528292 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:07:40.563536  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:07:40.563561  528292 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:07:40.522284  526230 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:07:40.759706  526230 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:07:40.998357  526230 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:07:40.998638  526230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:07:41.078528  526230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:07:41.191565  526230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:07:41.638357  526230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:07:42.364284  526230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:07:42.739038  526230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:07:42.739845  526230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:07:42.742666  526230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:07:42.746639  526230 out.go:252]   - Booting up control plane ...
	I1227 10:07:42.746772  526230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:07:42.751349  526230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:07:42.754280  526230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:07:42.785688  526230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:07:42.786307  526230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:07:42.796666  526230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:07:42.798903  526230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:07:42.799304  526230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:07:42.971767  526230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:07:42.971899  526230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:07:43.974515  526230 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001579911s
	I1227 10:07:43.975253  526230 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:07:43.975474  526230 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1227 10:07:43.975568  526230 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:07:43.975647  526230 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:07:40.612710  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:07:40.612782  528292 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:07:40.653744  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:07:40.653818  528292 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:07:40.704720  528292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:07:46.804987  528292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.746139088s)
	I1227 10:07:46.805061  528292 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.693111154s)
	I1227 10:07:46.805101  528292 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:07:46.805163  528292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:07:46.805239  528292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.541078572s)
	I1227 10:07:46.805617  528292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.100806585s)
	I1227 10:07:46.808669  528292 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-133340 addons enable metrics-server
	
	I1227 10:07:46.848621  528292 api_server.go:72] duration metric: took 7.255439956s to wait for apiserver process to appear ...
	I1227 10:07:46.848646  528292 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:07:46.848664  528292 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:46.854402  528292 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:07:46.857186  528292 addons.go:530] duration metric: took 7.263620496s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:07:46.867060  528292 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:07:46.868199  528292 api_server.go:141] control plane version: v1.35.0
	I1227 10:07:46.868228  528292 api_server.go:131] duration metric: took 19.572256ms to wait for apiserver health ...
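Editor's note: the readiness gate above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 with an "ok" body. A hedged Go sketch of that probe; it skips TLS verification to stay self-contained, whereas the real code trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same probe as the api_server.go lines above:
// GET the healthz URL and require a 200 response.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch free of CA handling; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	_ = checkHealthz("https://192.168.76.2:8443/healthz") // address from the log
}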
	I1227 10:07:46.868237  528292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:07:46.877385  528292 system_pods.go:59] 8 kube-system pods found
	I1227 10:07:46.877426  528292 system_pods.go:61] "coredns-7d764666f9-ztmc7" [f239b963-7c4a-4112-8652-c5b0f615f94f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:46.877437  528292 system_pods.go:61] "etcd-newest-cni-133340" [cfbaeb70-0fb0-4c4a-9a7e-163789d297a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:46.877443  528292 system_pods.go:61] "kindnet-fgjsl" [c7827a10-1fba-4ca9-a964-97f5b7ea1ceb] Running
	I1227 10:07:46.877450  528292 system_pods.go:61] "kube-apiserver-newest-cni-133340" [9c2aa856-552a-4144-af72-84fde5e9c118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:46.877458  528292 system_pods.go:61] "kube-controller-manager-newest-cni-133340" [4d2a8823-9c53-4856-8b61-0f5847c7877d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:46.877467  528292 system_pods.go:61] "kube-proxy-524xs" [21306208-0f93-4fa6-9524-38dc4245c9de] Running
	I1227 10:07:46.877474  528292 system_pods.go:61] "kube-scheduler-newest-cni-133340" [f9032e07-acb4-4316-af98-a51df2721f9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:46.877488  528292 system_pods.go:61] "storage-provisioner" [00d34553-4b22-4ac9-9a3b-c1a9cb443967] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:46.877494  528292 system_pods.go:74] duration metric: took 9.246712ms to wait for pod list to return data ...
	I1227 10:07:46.877508  528292 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:07:46.881274  528292 default_sa.go:45] found service account: "default"
	I1227 10:07:46.881301  528292 default_sa.go:55] duration metric: took 3.785049ms for default service account to be created ...
	I1227 10:07:46.881315  528292 kubeadm.go:587] duration metric: took 7.288140679s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:07:46.881331  528292 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:07:46.886298  528292 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:07:46.886332  528292 node_conditions.go:123] node cpu capacity is 2
	I1227 10:07:46.886346  528292 node_conditions.go:105] duration metric: took 5.009966ms to run NodePressure ...
	I1227 10:07:46.886359  528292 start.go:242] waiting for startup goroutines ...
	I1227 10:07:46.886366  528292 start.go:247] waiting for cluster config update ...
	I1227 10:07:46.886382  528292 start.go:256] writing updated cluster config ...
	I1227 10:07:46.886661  528292 ssh_runner.go:195] Run: rm -f paused
	I1227 10:07:46.988457  528292 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:07:46.991626  528292 out.go:203] 
	W1227 10:07:46.994467  528292 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:07:46.997489  528292 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:07:47.000502  528292 out.go:179] * Done! kubectl is now configured to use "newest-cni-133340" cluster and "default" namespace by default
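Editor's note: the warning a few lines above comes from comparing the minor versions of the host kubectl (1.33.2) and the cluster (1.35.0); a skew of more than one minor version triggers it. A small, hedged Go sketch of that comparison (an illustration of the check, not minikube's exact code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference of the minor components of two
// "major.minor.patch" version strings, as in the 1.33.2 vs 1.35.0 warning.
func minorSkew(client, cluster string) (int, error) {
	parse := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := parse(client)
	if err != nil {
		return 0, err
	}
	s, err := parse(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.35.0")
	fmt.Println("minor skew:", skew) // prints 2, matching the warning in the log
}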
	I1227 10:07:46.007092  526230 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.031054061s
	I1227 10:07:49.316192  526230 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.340024044s
	
	
	==> CRI-O <==
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.742065157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.746790205Z" level=info msg="Running pod sandbox: kube-system/kindnet-fgjsl/POD" id=d9472fc2-6098-48fb-adce-a40df9ae6897 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.746840126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.771093597Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=86ac3e44-6018-431e-af28-b72b52d147b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.77139116Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d9472fc2-6098-48fb-adce-a40df9ae6897 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.790078515Z" level=info msg="Ran pod sandbox a91c4e11ef4022ea552669de31bdcfa10adbcb6e75cad845f199e8bdb55bf05d with infra container: kube-system/kube-proxy-524xs/POD" id=86ac3e44-6018-431e-af28-b72b52d147b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.828013305Z" level=info msg="Ran pod sandbox 4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85 with infra container: kube-system/kindnet-fgjsl/POD" id=d9472fc2-6098-48fb-adce-a40df9ae6897 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.843846684Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=0d28402f-ddc7-4913-9ba8-f63edc45c32d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.844262623Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=26800f9d-c076-4dd6-9716-af958a9d5e16 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.848317878Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=3ad79379-6f7a-478c-8920-94a9acd3fa6c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.848660684Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=b3cb7e17-dfb7-432d-b7db-a720b9fcf396 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.852517996Z" level=info msg="Creating container: kube-system/kindnet-fgjsl/kindnet-cni" id=f4212c4f-f106-464b-8f7b-50bad3ba95ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.852629915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.854715636Z" level=info msg="Creating container: kube-system/kube-proxy-524xs/kube-proxy" id=113df369-888b-4a2d-a3bb-f19fe837c946 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.864790584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.87846021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.893311776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.893893642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.910567999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.046537627Z" level=info msg="Created container 78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2: kube-system/kindnet-fgjsl/kindnet-cni" id=f4212c4f-f106-464b-8f7b-50bad3ba95ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.062728213Z" level=info msg="Starting container: 78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2" id=83b5d769-6cd5-4fd8-b225-3df9d0809768 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.067758979Z" level=info msg="Created container 3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1: kube-system/kube-proxy-524xs/kube-proxy" id=113df369-888b-4a2d-a3bb-f19fe837c946 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.078962343Z" level=info msg="Starting container: 3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1" id=ada0ed6f-1116-474c-a7c7-75efa690bec3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.082344153Z" level=info msg="Started container" PID=1074 containerID=78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2 description=kube-system/kindnet-fgjsl/kindnet-cni id=83b5d769-6cd5-4fd8-b225-3df9d0809768 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.110856492Z" level=info msg="Started container" PID=1076 containerID=3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1 description=kube-system/kube-proxy-524xs/kube-proxy id=ada0ed6f-1116-474c-a7c7-75efa690bec3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a91c4e11ef4022ea552669de31bdcfa10adbcb6e75cad845f199e8bdb55bf05d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3ec22f1315d2e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   6 seconds ago       Running             kube-proxy                1                   a91c4e11ef402       kube-proxy-524xs                            kube-system
	78fd010d78a7c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   6 seconds ago       Running             kindnet-cni               1                   4ed055e8d3f10       kindnet-fgjsl                               kube-system
	cee879e19c3e8       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   12 seconds ago      Running             kube-scheduler            1                   9f5f82965a8ef       kube-scheduler-newest-cni-133340            kube-system
	c342531334826       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   12 seconds ago      Running             etcd                      1                   abff1b2348346       etcd-newest-cni-133340                      kube-system
	39aa7832ba5e4       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   12 seconds ago      Running             kube-controller-manager   1                   0b1143a36cb92       kube-controller-manager-newest-cni-133340   kube-system
	7a9d01b797fbd       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   12 seconds ago      Running             kube-apiserver            1                   d569adc450bc2       kube-apiserver-newest-cni-133340            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-133340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-133340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=newest-cni-133340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_07_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:07:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-133340
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:07:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-133340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                2ffc56ef-4a0e-4350-837c-13fb816f4d7e
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-133340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-fgjsl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-133340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-133340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-524xs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-133340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  31s   node-controller  Node newest-cni-133340 event: Registered Node newest-cni-133340 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-133340 event: Registered Node newest-cni-133340 in Controller
	
	
	==> dmesg <==
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:07] overlayfs: idmapped layers are currently not supported
	[ +29.217037] overlayfs: idmapped layers are currently not supported
	[  +5.170102] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596] <==
	{"level":"info","ts":"2025-12-27T10:07:39.493495Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:07:39.493560Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:07:39.493801Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:39.493842Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:39.512984Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:07:39.513187Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:07:39.513345Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:40.138282Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:40.138332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:40.138373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:40.138385Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:40.138401Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.142367Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.142430Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:40.142458Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.142468Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.150558Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:40.151507Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:40.153531Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:07:40.153840Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:40.150527Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-133340 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:07:40.171070Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:40.171820Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:07:40.194249Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:40.194287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:07:52 up  2:50,  0 user,  load average: 6.99, 3.72, 2.69
	Linux newest-cni-133340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2] <==
	I1227 10:07:45.189682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:07:45.244514       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:07:45.244655       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:07:45.244670       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:07:45.244685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:07:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:07:45.467606       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:07:45.467629       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:07:45.467638       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:07:45.467946       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2] <==
	I1227 10:07:44.136669       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:07:44.136675       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:07:44.136765       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:44.136783       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:44.136819       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:07:44.137154       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:44.137185       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:07:44.154828       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:44.154850       1 policy_source.go:248] refreshing policies
	I1227 10:07:44.164375       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:07:44.164415       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:07:44.230607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 10:07:44.333102       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:07:44.584546       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:07:44.716728       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:07:46.041035       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:07:46.144958       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:07:46.201385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:07:46.241809       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:07:46.385959       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.121.23"}
	I1227 10:07:46.421498       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.72.173"}
	I1227 10:07:48.379695       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:07:48.634167       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:07:48.691873       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:07:48.839118       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91] <==
	I1227 10:07:48.207662       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.207734       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.209838       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.209919       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.209969       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210034       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210083       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210140       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210405       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.214501       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.214651       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.214749       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.215280       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.216277       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.216325       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.216465       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.221579       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.221732       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.221803       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.223679       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.235506       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.284400       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.304282       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.304363       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:07:48.305917       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1] <==
	I1227 10:07:45.794414       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:07:46.339886       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:46.521341       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:46.521377       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:07:46.521450       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:07:46.672237       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:07:46.672291       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:07:46.683090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:07:46.683476       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:07:46.683664       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:46.684855       1 config.go:200] "Starting service config controller"
	I1227 10:07:46.684936       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:07:46.684987       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:07:46.685015       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:07:46.685051       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:07:46.685079       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:07:46.708492       1 config.go:309] "Starting node config controller"
	I1227 10:07:46.708581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:07:46.708616       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:07:46.796335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:07:46.796439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:07:46.796457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f] <==
	I1227 10:07:42.093575       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:07:43.846405       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:07:43.846438       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:07:43.846450       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:07:43.846458       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:07:44.052405       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:07:44.052444       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:44.074110       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:07:44.079714       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:07:44.079811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:07:44.079835       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:44.280427       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.322847     736 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.322946     736 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.322986     736 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.347607     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.348408     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-133340\" already exists" pod="kube-system/kube-apiserver-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.348428     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.392685     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-133340" containerName="kube-apiserver"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.393166     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-133340\" already exists" pod="kube-system/kube-controller-manager-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.393204     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.393367     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-133340" containerName="kube-controller-manager"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.399924     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-133340" containerName="etcd"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.401240     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.422282     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-133340\" already exists" pod="kube-system/kube-scheduler-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.481357     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562414     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21306208-0f93-4fa6-9524-38dc4245c9de-xtables-lock\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562461     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-xtables-lock\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562483     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21306208-0f93-4fa6-9524-38dc4245c9de-lib-modules\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562517     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-lib-modules\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562561     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-cni-cfg\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.642603     736 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: W1227 10:07:44.821484     736 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/crio-4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85 WatchSource:0}: Error finding container 4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85: Status 404 returned error can't find the container with id 4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85
	Dec 27 10:07:45 newest-cni-133340 kubelet[736]: E1227 10:07:45.226569     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:48 newest-cni-133340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:07:48 newest-cni-133340 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:07:48 newest-cni-133340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-133340 -n newest-cni-133340
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-133340 -n newest-cni-133340: exit status 2 (448.52082ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-133340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t: exit status 1 (112.046114ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-ztmc7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-r5k2g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-r2c5t" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-133340
helpers_test.go:244: (dbg) docker inspect newest-cni-133340:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50",
	        "Created": "2025-12-27T10:06:53.938355809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 528415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:07:30.862534319Z",
	            "FinishedAt": "2025-12-27T10:07:29.947930687Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/hostname",
	        "HostsPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/hosts",
	        "LogPath": "/var/lib/docker/containers/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50-json.log",
	        "Name": "/newest-cni-133340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-133340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-133340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50",
	                "LowerDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0-init/diff:/var/lib/docker/overlay2/888349771a41a46b09e7de676064af9effbe2f5ae2a8ba49ad062335fb2a70e5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c9d74f728c6a9877244e39be65b6ef5c79623cfc825950897d2d18fa4c76de0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-133340",
	                "Source": "/var/lib/docker/volumes/newest-cni-133340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-133340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-133340",
	                "name.minikube.sigs.k8s.io": "newest-cni-133340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0e184dd9795323d477977ed5616d550ad340ad250881cf42962f0f6fe2da274",
	            "SandboxKey": "/var/run/docker/netns/e0e184dd9795",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-133340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:56:e3:05:83:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "96182c0697dfbd3eb2978fc5bdfe5ab11e5c5e202e442a3bbbd2ca0b5a3c02a5",
	                    "EndpointID": "687db72d3da4b43193b932e64f2ef812f42c3013e18c12e0a6c5603fa5aa7dcf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-133340",
	                        "83f3564ca786"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340: exit status 2 (506.945712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-133340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-133340 logs -n 25: (1.32145354s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p default-k8s-diff-port-681744 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:06 UTC │
	│ image   │ embed-certs-017122 image list --format=json                                                                                                                                                                                                   │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p embed-certs-017122 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ delete  │ -p embed-certs-017122                                                                                                                                                                                                                         │ embed-certs-017122                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ image   │ default-k8s-diff-port-681744 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ pause   │ -p default-k8s-diff-port-681744 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-681744                                                                                                                                                                                                               │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ delete  │ -p default-k8s-diff-port-681744                                                                                                                                                                                                               │ default-k8s-diff-port-681744      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-gcs-425359 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-425359        │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-425359                                                                                                                                                                                                                 │ test-preload-dl-gcs-425359        │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-github-343343 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-343343     │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-github-343343                                                                                                                                                                                                              │ test-preload-dl-github-343343     │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-955830 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-955830 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-955830                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-955830 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p auto-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-246753                       │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-133340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ stop    │ -p newest-cni-133340 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-133340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ image   │ newest-cni-133340 image list --format=json                                                                                                                                                                                                    │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ pause   │ -p newest-cni-133340 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-133340                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:07:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:07:30.588310  528292 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:30.588451  528292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:30.588462  528292 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:30.588468  528292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:30.588731  528292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 10:07:30.589114  528292 out.go:368] Setting JSON to false
	I1227 10:07:30.590060  528292 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10200,"bootTime":1766819851,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 10:07:30.590141  528292 start.go:143] virtualization:  
	I1227 10:07:30.593187  528292 out.go:179] * [newest-cni-133340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:07:30.596934  528292 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 10:07:30.597069  528292 notify.go:221] Checking for updates...
	I1227 10:07:30.602859  528292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:07:30.605775  528292 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:30.608676  528292 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 10:07:30.611647  528292 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:07:30.614518  528292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:07:30.617848  528292 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:30.618498  528292 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:07:30.642551  528292 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:07:30.642665  528292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:30.701362  528292 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:30.691895894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:30.701473  528292 docker.go:319] overlay module found
	I1227 10:07:30.704824  528292 out.go:179] * Using the docker driver based on existing profile
	I1227 10:07:30.707659  528292 start.go:309] selected driver: docker
	I1227 10:07:30.707677  528292 start.go:928] validating driver "docker" against &{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:30.707792  528292 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:07:30.708481  528292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:30.765688  528292 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:30.756260109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:30.766053  528292 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:07:30.766081  528292 cni.go:84] Creating CNI manager for ""
	I1227 10:07:30.766142  528292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:30.766273  528292 start.go:353] cluster config:
	{Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:30.771716  528292 out.go:179] * Starting "newest-cni-133340" primary control-plane node in "newest-cni-133340" cluster
	I1227 10:07:30.774841  528292 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:07:30.777842  528292 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:07:30.780585  528292 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:30.780635  528292 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:07:30.780645  528292 cache.go:65] Caching tarball of preloaded images
	I1227 10:07:30.780681  528292 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:07:30.780737  528292 preload.go:251] Found /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:07:30.780748  528292 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:07:30.780882  528292 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:07:30.800792  528292 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:07:30.800812  528292 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:07:30.800828  528292 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:07:30.800864  528292 start.go:360] acquireMachinesLock for newest-cni-133340: {Name:mke43a3ebd8f4eaf65da86bf9dafee410f8229a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:07:30.800921  528292 start.go:364] duration metric: took 39.811µs to acquireMachinesLock for "newest-cni-133340"
	I1227 10:07:30.800941  528292 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:07:30.800946  528292 fix.go:54] fixHost starting: 
	I1227 10:07:30.801203  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:30.828045  528292 fix.go:112] recreateIfNeeded on newest-cni-133340: state=Stopped err=<nil>
	W1227 10:07:30.828075  528292 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:07:32.225693  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-246753
	
	I1227 10:07:32.225729  526230 ubuntu.go:182] provisioning hostname "auto-246753"
	I1227 10:07:32.225792  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:32.243587  526230 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:32.243911  526230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1227 10:07:32.243926  526230 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-246753 && echo "auto-246753" | sudo tee /etc/hostname
	I1227 10:07:32.396219  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-246753
	
	I1227 10:07:32.396352  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:32.414974  526230 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:32.415323  526230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1227 10:07:32.415344  526230 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-246753' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-246753/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-246753' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:07:32.558810  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
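
The SSH command above is minikube's idempotent /etc/hosts update: if no line already ends with the node name, it rewrites an existing 127.0.1.1 entry or appends one. For illustration only, here is a minimal Go sketch of the same logic (hypothetical, not minikube's code; it assumes it runs as root on the target host):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// updateHosts mirrors the shell logic from the log: if no /etc/hosts line
// ends with the node name, rewrite an existing 127.0.1.1 entry or append one.
func updateHosts(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
		return nil // hostname already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+name)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := updateHosts("/etc/hosts", "auto-246753"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
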
	I1227 10:07:32.558837  526230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:07:32.558856  526230 ubuntu.go:190] setting up certificates
	I1227 10:07:32.558867  526230 provision.go:84] configureAuth start
	I1227 10:07:32.558926  526230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-246753
	I1227 10:07:32.576995  526230 provision.go:143] copyHostCerts
	I1227 10:07:32.577070  526230 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:07:32.577086  526230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:07:32.577160  526230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:07:32.577270  526230 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:07:32.577281  526230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:07:32.577309  526230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:07:32.577375  526230 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:07:32.577385  526230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:07:32.577410  526230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:07:32.577472  526230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.auto-246753 san=[127.0.0.1 192.168.85.2 auto-246753 localhost minikube]
	I1227 10:07:33.022596  526230 provision.go:177] copyRemoteCerts
	I1227 10:07:33.022672  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:07:33.022714  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.040662  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.138357  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:07:33.157291  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 10:07:33.175618  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:07:33.193610  526230 provision.go:87] duration metric: took 634.719087ms to configureAuth
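
configureAuth regenerates the machine's server certificate, signed by the shared minikube CA and carrying the SANs listed in the log (127.0.0.1, 192.168.85.2, auto-246753, localhost, minikube). The standard-library sketch below illustrates that kind of SAN-bearing server certificate; it generates a throwaway CA instead of loading minikube's ca.pem/ca-key.pem, and error handling is elided, so treat it as an assumption-laden example rather than minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA (minikube instead loads ca.pem/ca-key.pem from .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs reported in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "auto-246753", Organization: []string{"jenkins.auto-246753"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"auto-246753", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; the log later copies such a file to /etc/docker/server.pem.
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
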
	I1227 10:07:33.193637  526230 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:07:33.193826  526230 config.go:182] Loaded profile config "auto-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:33.193925  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.210976  526230 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:33.211330  526230 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1227 10:07:33.211362  526230 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:07:33.506947  526230 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:33.506970  526230 machine.go:97] duration metric: took 4.460655316s to provisionDockerMachine
	I1227 10:07:33.506981  526230 client.go:176] duration metric: took 13.159262918s to LocalClient.Create
	I1227 10:07:33.506994  526230 start.go:167] duration metric: took 13.159323767s to libmachine.API.Create "auto-246753"
	I1227 10:07:33.507002  526230 start.go:293] postStartSetup for "auto-246753" (driver="docker")
	I1227 10:07:33.507011  526230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:33.507092  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:33.507142  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.525356  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.626961  526230 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:33.630514  526230 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:33.630545  526230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:07:33.630558  526230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:07:33.630618  526230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:07:33.630703  526230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:07:33.630811  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:33.638782  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:33.657343  526230 start.go:296] duration metric: took 150.327928ms for postStartSetup
	I1227 10:07:33.657733  526230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-246753
	I1227 10:07:33.675118  526230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/config.json ...
	I1227 10:07:33.675413  526230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:33.675462  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.692232  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.787228  526230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:33.791936  526230 start.go:128] duration metric: took 13.447727779s to createHost
	I1227 10:07:33.791963  526230 start.go:83] releasing machines lock for "auto-246753", held for 13.44786479s
	I1227 10:07:33.792035  526230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-246753
	I1227 10:07:33.809848  526230 ssh_runner.go:195] Run: cat /version.json
	I1227 10:07:33.809870  526230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:07:33.809899  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.809942  526230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-246753
	I1227 10:07:33.830236  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.830090  526230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/auto-246753/id_rsa Username:docker}
	I1227 10:07:33.926385  526230 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:34.026078  526230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:07:34.063370  526230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:07:34.067822  526230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:07:34.067912  526230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:07:34.096687  526230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
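
The find/mv invocation above sidelines any bridge or podman CNI configs under /etc/cni/net.d by appending .mk_disabled, so only the CNI that minikube manages (kindnet here) stays active. A roughly equivalent, hypothetical Go sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirror: find /etc/cni/net.d -maxdepth 1 -type f \( -name *bridge* -or -name *podman* \)
	//         -and -not -name *.mk_disabled -exec mv {} {}.mk_disabled
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
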
	I1227 10:07:34.096714  526230 start.go:496] detecting cgroup driver to use...
	I1227 10:07:34.096788  526230 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:07:34.096871  526230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:07:34.115384  526230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:07:34.128607  526230 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:07:34.128711  526230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:34.148936  526230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:34.168401  526230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:34.328494  526230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:34.495885  526230 docker.go:234] disabling docker service ...
	I1227 10:07:34.495953  526230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:34.519745  526230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:34.541927  526230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:34.688479  526230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:34.842457  526230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:34.857547  526230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:34.872507  526230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:34.872589  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.882142  526230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:34.882297  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.891557  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.900324  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.909299  526230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:34.917643  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.926847  526230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.940904  526230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:34.950339  526230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:34.958741  526230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:34.966803  526230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:35.113152  526230 ssh_runner.go:195] Run: sudo systemctl restart crio
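
The sed sequence above leaves /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls entry of "net.ipv4.ip_unprivileged_port_start=0", after which CRI-O is restarted to pick up the changes. A hypothetical Go sketch of the same in-place rewrites (the rewrite helper is invented, and the branch that creates default_sysctls when it is missing is omitted):

package main

import (
	"os"
	"regexp"
)

// rewrite applies a whole-file regexp substitution, like the sed -i calls in the log.
// It needs root to touch files under /etc/crio.
func rewrite(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(pattern).ReplaceAll(data, []byte(repl))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = rewrite(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	_ = rewrite(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
	// The log first deletes any existing conmon_cgroup line, then re-adds it after cgroup_manager.
	_ = rewrite(conf, `(?m)^conmon_cgroup = .*\n`, "")
	_ = rewrite(conf, `(?m)^(cgroup_manager = .*)$`, "$1\nconmon_cgroup = \"pod\"")
	// Insert the unprivileged-port sysctl at the head of an existing default_sysctls list.
	_ = rewrite(conf, `(?m)^(default_sysctls *= *\[)`, "$1\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
}
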
	I1227 10:07:35.335467  526230 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:35.335540  526230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:35.341174  526230 start.go:574] Will wait 60s for crictl version
	I1227 10:07:35.341254  526230 ssh_runner.go:195] Run: which crictl
	I1227 10:07:35.344987  526230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:35.372618  526230 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:35.372706  526230 ssh_runner.go:195] Run: crio --version
	I1227 10:07:35.405240  526230 ssh_runner.go:195] Run: crio --version
	I1227 10:07:35.450049  526230 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:30.831001  528292 out.go:252] * Restarting existing docker container for "newest-cni-133340" ...
	I1227 10:07:30.831109  528292 cli_runner.go:164] Run: docker start newest-cni-133340
	I1227 10:07:31.096088  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:31.117424  528292 kic.go:430] container "newest-cni-133340" state is running.
	I1227 10:07:31.117836  528292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:07:31.141782  528292 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/config.json ...
	I1227 10:07:31.142384  528292 machine.go:94] provisionDockerMachine start ...
	I1227 10:07:31.142540  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:31.173346  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:31.173665  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:31.173674  528292 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:07:31.174456  528292 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:07:34.325928  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:07:34.325955  528292 ubuntu.go:182] provisioning hostname "newest-cni-133340"
	I1227 10:07:34.326052  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:34.347520  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:34.347853  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:34.347868  528292 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-133340 && echo "newest-cni-133340" | sudo tee /etc/hostname
	I1227 10:07:34.519970  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-133340
	
	I1227 10:07:34.520036  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:34.543713  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:34.544024  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:34.544046  528292 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133340/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:07:34.702495  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:07:34.702521  528292 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-301174/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-301174/.minikube}
	I1227 10:07:34.702551  528292 ubuntu.go:190] setting up certificates
	I1227 10:07:34.702561  528292 provision.go:84] configureAuth start
	I1227 10:07:34.702629  528292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:07:34.720673  528292 provision.go:143] copyHostCerts
	I1227 10:07:34.720739  528292 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem, removing ...
	I1227 10:07:34.720756  528292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem
	I1227 10:07:34.720817  528292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/ca.pem (1082 bytes)
	I1227 10:07:34.720922  528292 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem, removing ...
	I1227 10:07:34.720927  528292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem
	I1227 10:07:34.720947  528292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/cert.pem (1123 bytes)
	I1227 10:07:34.721003  528292 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem, removing ...
	I1227 10:07:34.721008  528292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem
	I1227 10:07:34.721032  528292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-301174/.minikube/key.pem (1675 bytes)
	I1227 10:07:34.721089  528292 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133340 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-133340]
	I1227 10:07:34.992235  528292 provision.go:177] copyRemoteCerts
	I1227 10:07:34.992349  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:07:34.992407  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.024152  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.141958  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:07:35.166084  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:07:35.188385  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:07:35.212486  528292 provision.go:87] duration metric: took 509.900129ms to configureAuth
	I1227 10:07:35.212566  528292 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:07:35.212828  528292 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:35.212991  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.233464  528292 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:35.233765  528292 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1227 10:07:35.233779  528292 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:07:35.600708  528292 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:35.600729  528292 machine.go:97] duration metric: took 4.458277637s to provisionDockerMachine
	I1227 10:07:35.600740  528292 start.go:293] postStartSetup for "newest-cni-133340" (driver="docker")
	I1227 10:07:35.600751  528292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:35.600840  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:35.600885  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.631940  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.740128  528292 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:35.744262  528292 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:35.744288  528292 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:07:35.744300  528292 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/addons for local assets ...
	I1227 10:07:35.744356  528292 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-301174/.minikube/files for local assets ...
	I1227 10:07:35.744438  528292 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem -> 3030432.pem in /etc/ssl/certs
	I1227 10:07:35.744542  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:35.758865  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:35.787251  528292 start.go:296] duration metric: took 186.495985ms for postStartSetup
	I1227 10:07:35.787329  528292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:35.787382  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.813827  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.916352  528292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:35.922125  528292 fix.go:56] duration metric: took 5.121172304s for fixHost
	I1227 10:07:35.922263  528292 start.go:83] releasing machines lock for "newest-cni-133340", held for 5.121332256s
	I1227 10:07:35.922358  528292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-133340
	I1227 10:07:35.945272  528292 ssh_runner.go:195] Run: cat /version.json
	I1227 10:07:35.945321  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.945564  528292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:07:35.945620  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:35.995121  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:35.995928  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:36.106061  528292 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:36.211544  528292 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:07:36.251766  528292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:07:36.256338  528292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:07:36.256425  528292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:07:36.268221  528292 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:07:36.268242  528292 start.go:496] detecting cgroup driver to use...
	I1227 10:07:36.268275  528292 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:07:36.268325  528292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:07:36.285307  528292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:07:36.300036  528292 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:07:36.300105  528292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:36.316999  528292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:36.331385  528292 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:36.474969  528292 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:36.618805  528292 docker.go:234] disabling docker service ...
	I1227 10:07:36.618867  528292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:36.637752  528292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:36.652905  528292 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:36.818556  528292 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:36.981407  528292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:36.996765  528292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:37.014607  528292 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:37.014679  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.026767  528292 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:37.026832  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.037278  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.047562  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.057317  528292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:37.066674  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.076081  528292 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.084865  528292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:37.094751  528292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:37.103832  528292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:37.112546  528292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:37.261632  528292 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:07:37.448233  528292 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:37.448300  528292 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:37.454226  528292 start.go:574] Will wait 60s for crictl version
	I1227 10:07:37.454288  528292 ssh_runner.go:195] Run: which crictl
	I1227 10:07:37.458310  528292 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:37.507285  528292 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:37.507372  528292 ssh_runner.go:195] Run: crio --version
	I1227 10:07:37.540745  528292 ssh_runner.go:195] Run: crio --version
	I1227 10:07:37.586631  528292 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:35.452952  526230 cli_runner.go:164] Run: docker network inspect auto-246753 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:35.476776  526230 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:35.481366  526230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:35.492005  526230 kubeadm.go:884] updating cluster {Name:auto-246753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:35.492151  526230 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:35.492213  526230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:35.533088  526230 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:35.533114  526230 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:35.533172  526230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:35.560910  526230 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:35.560936  526230 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:35.560944  526230 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:35.561071  526230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-246753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
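
The kubelet drop-in above is rendered per node, with --hostname-override and --node-ip filled in from the cluster config before it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down). A hypothetical text/template sketch of that rendering, using the flag values from this log:

package main

import (
	"os"
	"text/template"
)

// Hypothetical template for the kubelet drop-in shown above; minikube renders
// something similar before copying it onto the node.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.35.0",
		"NodeName":          "auto-246753",
		"NodeIP":            "192.168.85.2",
	})
}
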
	I1227 10:07:35.561203  526230 ssh_runner.go:195] Run: crio config
	I1227 10:07:35.665825  526230 cni.go:84] Creating CNI manager for ""
	I1227 10:07:35.665894  526230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:35.665925  526230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:07:35.665966  526230 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-246753 NodeName:auto-246753 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:35.666129  526230 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-246753"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:07:35.666259  526230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:35.675340  526230 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:35.675462  526230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:35.684462  526230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1227 10:07:35.698746  526230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:35.717207  526230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1227 10:07:35.730179  526230 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:35.735783  526230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:35.748180  526230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:35.884511  526230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:35.900419  526230 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753 for IP: 192.168.85.2
	I1227 10:07:35.900438  526230 certs.go:195] generating shared ca certs ...
	I1227 10:07:35.900454  526230 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:35.900592  526230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:07:35.900668  526230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:07:35.900675  526230 certs.go:257] generating profile certs ...
	I1227 10:07:35.900737  526230 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.key
	I1227 10:07:35.900752  526230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.crt with IP's: []
	I1227 10:07:36.141895  526230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.crt ...
	I1227 10:07:36.141930  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.crt: {Name:mkf0ce9b15cb1d547dcb69259189b5bd4371836c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.142124  526230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.key ...
	I1227 10:07:36.142139  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/client.key: {Name:mk3b606387df1d40c0813baaf1d7802470b1d10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.142277  526230 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a
	I1227 10:07:36.142298  526230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:07:36.506335  526230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a ...
	I1227 10:07:36.506368  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a: {Name:mk95e18bee89c196840e37dc9a03521c66824287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.506555  526230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a ...
	I1227 10:07:36.506569  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a: {Name:mkf71db83df1cc85db92268d54c0edff2cbfac8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.506664  526230 certs.go:382] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt.3f0dc94a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt
	I1227 10:07:36.506748  526230 certs.go:386] copying /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key.3f0dc94a -> /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key
	I1227 10:07:36.506809  526230 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key
	I1227 10:07:36.506825  526230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt with IP's: []
	I1227 10:07:36.621658  526230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt ...
	I1227 10:07:36.621689  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt: {Name:mk3534a59dcc1f87c2493eb49bdfcb0cf5d09a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.621854  526230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key ...
	I1227 10:07:36.621868  526230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key: {Name:mk35b6a3adeb9ea216b0ad02a7f5e7e29c6e4a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:36.622049  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:07:36.622100  526230 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:36.622114  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:36.622143  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:36.622190  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:36.622218  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:36.622272  526230 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:36.622834  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:36.649204  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:07:36.679146  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:36.710358  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:36.769368  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1227 10:07:36.791256  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:07:36.813439  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:36.832939  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/auto-246753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:07:36.851320  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:36.877085  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:07:36.904809  526230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:07:36.929748  526230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:36.944877  526230 ssh_runner.go:195] Run: openssl version
	I1227 10:07:36.951832  526230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:07:36.960000  526230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:07:36.968344  526230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:07:36.972613  526230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:07:36.972750  526230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:07:37.015782  526230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:37.025966  526230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3030432.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:37.035375  526230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.044469  526230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:37.053266  526230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.057741  526230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.057855  526230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:37.100638  526230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:07:37.109038  526230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:07:37.117603  526230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.125941  526230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:07:37.135015  526230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.139532  526230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.139595  526230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:07:37.195138  526230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:07:37.203686  526230 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/303043.pem /etc/ssl/certs/51391683.0
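
	The three openssl/ln sequences above install each PEM into the system trust store under its OpenSSL subject-hash name, so clients can resolve the hash-named .0 symlink in /etc/ssl/certs back to the certificate. A minimal sketch of the same pattern for a single certificate (the path is illustrative, not taken from this run):

	  cert=/usr/share/ca-certificates/example.pem      # hypothetical certificate
	  hash=$(openssl x509 -hash -noout -in "$cert")    # prints the subject hash, e.g. b5213941
	  sudo ln -fs "$cert" "/etc/ssl/certs/$hash.0"     # OpenSSL looks the CA up by this hash-named link
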
	I1227 10:07:37.211395  526230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:37.215890  526230 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:07:37.215950  526230 kubeadm.go:401] StartCluster: {Name:auto-246753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-246753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:37.216028  526230 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:37.216087  526230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:37.243392  526230 cri.go:96] found id: ""
	I1227 10:07:37.243500  526230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:37.253003  526230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:07:37.260830  526230 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:07:37.260892  526230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:07:37.271475  526230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:07:37.271550  526230 kubeadm.go:158] found existing configuration files:
	
	I1227 10:07:37.271635  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:07:37.281814  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:07:37.281890  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:07:37.293268  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:07:37.302255  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:07:37.302316  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:07:37.310723  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:07:37.319206  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:07:37.319282  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:07:37.326832  526230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:07:37.335959  526230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:07:37.336022  526230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
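
	Each grep/rm pair above checks whether a leftover kubeconfig still points at https://control-plane.minikube.internal:8443 and deletes it when it does not, so kubeadm init starts from a clean slate. A condensed sketch of the same cleanup (the loop is illustrative; the endpoint and file names are taken from the log):

	  endpoint="https://control-plane.minikube.internal:8443"
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	  done
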
	I1227 10:07:37.343075  526230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:07:37.405015  526230 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:07:37.405196  526230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:07:37.495683  526230 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:07:37.495758  526230 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:07:37.495798  526230 kubeadm.go:319] OS: Linux
	I1227 10:07:37.495861  526230 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:07:37.495914  526230 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:07:37.495964  526230 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:07:37.496015  526230 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:07:37.496067  526230 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:07:37.496118  526230 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:07:37.496167  526230 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:07:37.496218  526230 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:07:37.496273  526230 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:07:37.585634  526230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:07:37.585774  526230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:07:37.585925  526230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:07:37.614646  526230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:07:37.589772  528292 cli_runner.go:164] Run: docker network inspect newest-cni-133340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:37.618522  528292 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:37.623370  528292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
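
	The one-liner above updates /etc/hosts idempotently: any existing host.minikube.internal entry is filtered out and the current gateway address is appended before the file is copied back into place. The same commands, unrolled for readability (the temp-file name is illustrative):

	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo $'192.168.76.1\thost.minikube.internal'
	  } > /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts
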
	I1227 10:07:37.636096  528292 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 10:07:37.638936  528292 kubeadm.go:884] updating cluster {Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:37.639100  528292 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:37.639165  528292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:37.674972  528292 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:37.674994  528292 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:37.675048  528292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:37.719683  528292 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:37.719757  528292 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:37.719780  528292 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:37.719943  528292 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-133340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
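
	The kubelet drop-in above follows the usual systemd override pattern: an empty ExecStart= clears the packaged command before the minikube-specific one is declared. To inspect the merged result on the node one could run standard systemctl commands (not part of this log):

	  systemctl cat kubelet                  # kubelet.service plus the 10-kubeadm.conf drop-in
	  systemctl show kubelet -p ExecStart    # the effective ExecStart after the reset
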
	I1227 10:07:37.720065  528292 ssh_runner.go:195] Run: crio config
	I1227 10:07:37.802253  528292 cni.go:84] Creating CNI manager for ""
	I1227 10:07:37.802321  528292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:37.802354  528292 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 10:07:37.802409  528292 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-133340 NodeName:newest-cni-133340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:37.802588  528292 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-133340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:07:37.802703  528292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:37.811204  528292 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:37.811320  528292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:37.819279  528292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:07:37.838327  528292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:37.859717  528292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
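
	The rendered kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new, as the scp line above shows. As a sanity check it could be validated in place with kubeadm itself; recent kubeadm releases provide a "config validate" subcommand (the binary path below mirrors the one used in this run):

	  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
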
	I1227 10:07:37.885407  528292 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:37.891392  528292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:07:37.905565  528292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:38.065820  528292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:38.087953  528292 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340 for IP: 192.168.76.2
	I1227 10:07:38.087977  528292 certs.go:195] generating shared ca certs ...
	I1227 10:07:38.087994  528292 certs.go:227] acquiring lock for ca certs: {Name:mk8bd99999eb300c33129cb11fd55a1be3001328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:38.088180  528292 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key
	I1227 10:07:38.088271  528292 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key
	I1227 10:07:38.088287  528292 certs.go:257] generating profile certs ...
	I1227 10:07:38.088416  528292 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/client.key
	I1227 10:07:38.088525  528292 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key.5a59841a
	I1227 10:07:38.088586  528292 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key
	I1227 10:07:38.088742  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem (1338 bytes)
	W1227 10:07:38.088796  528292 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:38.088814  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:38.088844  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:38.088888  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:38.088932  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:38.088997  528292 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem (1708 bytes)
	I1227 10:07:38.089805  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:38.116578  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 10:07:38.144137  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:38.190121  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:38.261666  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:07:38.334591  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:07:38.366097  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:38.405330  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/newest-cni-133340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:07:38.424715  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/ssl/certs/3030432.pem --> /usr/share/ca-certificates/3030432.pem (1708 bytes)
	I1227 10:07:38.449471  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:38.476555  528292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-301174/.minikube/certs/303043.pem --> /usr/share/ca-certificates/303043.pem (1338 bytes)
	I1227 10:07:38.501385  528292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:38.519994  528292 ssh_runner.go:195] Run: openssl version
	I1227 10:07:38.527473  528292 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.537585  528292 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3030432.pem /etc/ssl/certs/3030432.pem
	I1227 10:07:38.546095  528292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.550055  528292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:17 /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.550191  528292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3030432.pem
	I1227 10:07:38.592498  528292 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:38.601578  528292 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.609580  528292 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:38.617934  528292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.622405  528292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.622531  528292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:38.664331  528292 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:07:38.672585  528292 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.680551  528292 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/303043.pem /etc/ssl/certs/303043.pem
	I1227 10:07:38.688566  528292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.692605  528292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:17 /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.692720  528292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/303043.pem
	I1227 10:07:38.736008  528292 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:07:38.744397  528292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:38.748535  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:07:38.790685  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:07:38.890669  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:07:38.994352  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:07:39.061759  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:07:39.212463  528292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
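
	The six openssl checks above confirm that none of the reused control-plane certificates expires within the next 86400 seconds (24 hours) before the existing cluster is restarted. A compact equivalent over the same files (the loop is illustrative):

	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 || echo "$c.crt expires within 24h"
	  done
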
	I1227 10:07:39.347487  528292 kubeadm.go:401] StartCluster: {Name:newest-cni-133340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-133340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:39.347630  528292 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:39.347724  528292 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:39.463794  528292 cri.go:96] found id: "cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f"
	I1227 10:07:39.463871  528292 cri.go:96] found id: "c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596"
	I1227 10:07:39.463901  528292 cri.go:96] found id: "39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91"
	I1227 10:07:39.463919  528292 cri.go:96] found id: "7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2"
	I1227 10:07:39.463952  528292 cri.go:96] found id: ""
	I1227 10:07:39.464037  528292 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:07:39.511844  528292 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:39Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:39.511965  528292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:39.546819  528292 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:07:39.546889  528292 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:07:39.546975  528292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:07:39.563851  528292 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:07:39.564354  528292 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-133340" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:39.564502  528292 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-301174/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-133340" cluster setting kubeconfig missing "newest-cni-133340" context setting]
	I1227 10:07:39.564857  528292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:39.566431  528292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:07:39.591889  528292 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:07:39.591965  528292 kubeadm.go:602] duration metric: took 45.038493ms to restartPrimaryControlPlane
	I1227 10:07:39.591994  528292 kubeadm.go:403] duration metric: took 244.515464ms to StartCluster
	I1227 10:07:39.592043  528292 settings.go:142] acquiring lock: {Name:mkd3c0090464e277fd903dd0170c6ace5a6172ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:39.592132  528292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 10:07:39.592854  528292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/kubeconfig: {Name:mk3a3b85dc4ae4614cb3a9a7459a1ec0ee3f4d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:39.593118  528292 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:07:39.593566  528292 config.go:182] Loaded profile config "newest-cni-133340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:39.593566  528292 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:07:39.593641  528292 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-133340"
	I1227 10:07:39.593663  528292 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-133340"
	W1227 10:07:39.593675  528292 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:07:39.593698  528292 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:39.593731  528292 addons.go:70] Setting dashboard=true in profile "newest-cni-133340"
	I1227 10:07:39.593756  528292 addons.go:239] Setting addon dashboard=true in "newest-cni-133340"
	W1227 10:07:39.593786  528292 addons.go:248] addon dashboard should already be in state true
	I1227 10:07:39.593820  528292 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:39.594284  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.594819  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.594885  528292 addons.go:70] Setting default-storageclass=true in profile "newest-cni-133340"
	I1227 10:07:39.594915  528292 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-133340"
	I1227 10:07:39.595647  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.602219  528292 out.go:179] * Verifying Kubernetes components...
	I1227 10:07:39.610273  528292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:39.650957  528292 addons.go:239] Setting addon default-storageclass=true in "newest-cni-133340"
	W1227 10:07:39.650980  528292 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:07:39.651006  528292 host.go:66] Checking if "newest-cni-133340" exists ...
	I1227 10:07:39.651970  528292 cli_runner.go:164] Run: docker container inspect newest-cni-133340 --format={{.State.Status}}
	I1227 10:07:39.674748  528292 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:07:39.679735  528292 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:07:39.679819  528292 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:07:37.620144  526230 out.go:252]   - Generating certificates and keys ...
	I1227 10:07:37.620284  526230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:07:37.620382  526230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:07:37.800226  526230 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:07:38.363721  526230 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:07:38.847221  526230 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:07:39.382240  526230 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:07:39.587529  526230 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:07:39.588295  526230 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-246753 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:07:39.841366  526230 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:07:39.848989  526230 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-246753 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:07:39.687466  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:07:39.687498  528292 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:07:39.687590  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:39.687838  528292 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:07:39.687852  528292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:07:39.687891  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:39.690585  528292 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:07:39.690604  528292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:07:39.690657  528292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-133340
	I1227 10:07:39.738395  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:39.750890  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:39.760026  528292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/newest-cni-133340/id_rsa Username:docker}
	I1227 10:07:40.058753  528292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:07:40.111871  528292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:40.177095  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:07:40.177160  528292 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:07:40.264091  528292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:07:40.266645  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:07:40.266723  528292 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:07:40.316117  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:07:40.316194  528292 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:07:40.389941  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:07:40.390027  528292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:07:40.482599  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:07:40.482621  528292 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:07:40.536161  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:07:40.536187  528292 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:07:40.563536  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:07:40.563561  528292 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:07:40.522284  526230 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:07:40.759706  526230 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:07:40.998357  526230 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:07:40.998638  526230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:07:41.078528  526230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:07:41.191565  526230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:07:41.638357  526230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:07:42.364284  526230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:07:42.739038  526230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:07:42.739845  526230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:07:42.742666  526230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:07:42.746639  526230 out.go:252]   - Booting up control plane ...
	I1227 10:07:42.746772  526230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:07:42.751349  526230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:07:42.754280  526230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:07:42.785688  526230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:07:42.786307  526230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:07:42.796666  526230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:07:42.798903  526230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:07:42.799304  526230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:07:42.971767  526230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:07:42.971899  526230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:07:43.974515  526230 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001579911s
	I1227 10:07:43.975253  526230 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:07:43.975474  526230 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1227 10:07:43.975568  526230 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:07:43.975647  526230 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:07:40.612710  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:07:40.612782  528292 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:07:40.653744  528292 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:07:40.653818  528292 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:07:40.704720  528292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:07:46.804987  528292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.746139088s)
	I1227 10:07:46.805061  528292 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.693111154s)
	I1227 10:07:46.805101  528292 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:07:46.805163  528292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:07:46.805239  528292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.541078572s)
	I1227 10:07:46.805617  528292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.100806585s)
	I1227 10:07:46.808669  528292 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-133340 addons enable metrics-server
	
	I1227 10:07:46.848621  528292 api_server.go:72] duration metric: took 7.255439956s to wait for apiserver process to appear ...
	I1227 10:07:46.848646  528292 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:07:46.848664  528292 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:46.854402  528292 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:07:46.857186  528292 addons.go:530] duration metric: took 7.263620496s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:07:46.867060  528292 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
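
	The health wait above polls the apiserver's /healthz endpoint until it returns 200 with body "ok". Once the kubeconfig has been written, the same check can be reproduced from the host with plain kubectl (not part of this log):

	  kubectl --context newest-cni-133340 get --raw /healthz   # prints "ok" when the apiserver is healthy
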
	I1227 10:07:46.868199  528292 api_server.go:141] control plane version: v1.35.0
	I1227 10:07:46.868228  528292 api_server.go:131] duration metric: took 19.572256ms to wait for apiserver health ...
	I1227 10:07:46.868237  528292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:07:46.877385  528292 system_pods.go:59] 8 kube-system pods found
	I1227 10:07:46.877426  528292 system_pods.go:61] "coredns-7d764666f9-ztmc7" [f239b963-7c4a-4112-8652-c5b0f615f94f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:46.877437  528292 system_pods.go:61] "etcd-newest-cni-133340" [cfbaeb70-0fb0-4c4a-9a7e-163789d297a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:46.877443  528292 system_pods.go:61] "kindnet-fgjsl" [c7827a10-1fba-4ca9-a964-97f5b7ea1ceb] Running
	I1227 10:07:46.877450  528292 system_pods.go:61] "kube-apiserver-newest-cni-133340" [9c2aa856-552a-4144-af72-84fde5e9c118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:46.877458  528292 system_pods.go:61] "kube-controller-manager-newest-cni-133340" [4d2a8823-9c53-4856-8b61-0f5847c7877d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:46.877467  528292 system_pods.go:61] "kube-proxy-524xs" [21306208-0f93-4fa6-9524-38dc4245c9de] Running
	I1227 10:07:46.877474  528292 system_pods.go:61] "kube-scheduler-newest-cni-133340" [f9032e07-acb4-4316-af98-a51df2721f9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:46.877488  528292 system_pods.go:61] "storage-provisioner" [00d34553-4b22-4ac9-9a3b-c1a9cb443967] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:07:46.877494  528292 system_pods.go:74] duration metric: took 9.246712ms to wait for pod list to return data ...
	I1227 10:07:46.877508  528292 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:07:46.881274  528292 default_sa.go:45] found service account: "default"
	I1227 10:07:46.881301  528292 default_sa.go:55] duration metric: took 3.785049ms for default service account to be created ...
	I1227 10:07:46.881315  528292 kubeadm.go:587] duration metric: took 7.288140679s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:07:46.881331  528292 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:07:46.886298  528292 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:07:46.886332  528292 node_conditions.go:123] node cpu capacity is 2
	I1227 10:07:46.886346  528292 node_conditions.go:105] duration metric: took 5.009966ms to run NodePressure ...
	I1227 10:07:46.886359  528292 start.go:242] waiting for startup goroutines ...
	I1227 10:07:46.886366  528292 start.go:247] waiting for cluster config update ...
	I1227 10:07:46.886382  528292 start.go:256] writing updated cluster config ...
	I1227 10:07:46.886661  528292 ssh_runner.go:195] Run: rm -f paused
	I1227 10:07:46.988457  528292 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:07:46.991626  528292 out.go:203] 
	W1227 10:07:46.994467  528292 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:07:46.997489  528292 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:07:47.000502  528292 out.go:179] * Done! kubectl is now configured to use "newest-cni-133340" cluster and "default" namespace by default
	I1227 10:07:46.007092  526230 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.031054061s
	I1227 10:07:49.316192  526230 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.340024044s
	I1227 10:07:51.478119  526230 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502470207s
	I1227 10:07:51.524906  526230 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:07:51.547190  526230 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:07:51.573099  526230 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:07:51.573572  526230 kubeadm.go:319] [mark-control-plane] Marking the node auto-246753 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:07:51.587850  526230 kubeadm.go:319] [bootstrap-token] Using token: lfcrep.bpm8owu6w15mzgek
	I1227 10:07:51.590882  526230 out.go:252]   - Configuring RBAC rules ...
	I1227 10:07:51.591011  526230 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:07:51.598499  526230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:07:51.612219  526230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:07:51.617116  526230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:07:51.623815  526230 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:07:51.629823  526230 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:07:51.886708  526230 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:07:52.371325  526230 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:07:52.887170  526230 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:07:52.888599  526230 kubeadm.go:319] 
	I1227 10:07:52.888683  526230 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:07:52.888689  526230 kubeadm.go:319] 
	I1227 10:07:52.888766  526230 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:07:52.888770  526230 kubeadm.go:319] 
	I1227 10:07:52.888801  526230 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:07:52.888860  526230 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:07:52.888919  526230 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:07:52.888923  526230 kubeadm.go:319] 
	I1227 10:07:52.888977  526230 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:07:52.888980  526230 kubeadm.go:319] 
	I1227 10:07:52.889028  526230 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:07:52.889032  526230 kubeadm.go:319] 
	I1227 10:07:52.889083  526230 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:07:52.889160  526230 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:07:52.889230  526230 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:07:52.889234  526230 kubeadm.go:319] 
	I1227 10:07:52.889318  526230 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:07:52.889395  526230 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:07:52.889399  526230 kubeadm.go:319] 
	I1227 10:07:52.889483  526230 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lfcrep.bpm8owu6w15mzgek \
	I1227 10:07:52.889585  526230 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c \
	I1227 10:07:52.889606  526230 kubeadm.go:319] 	--control-plane 
	I1227 10:07:52.889609  526230 kubeadm.go:319] 
	I1227 10:07:52.889694  526230 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:07:52.889698  526230 kubeadm.go:319] 
	I1227 10:07:52.889779  526230 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lfcrep.bpm8owu6w15mzgek \
	I1227 10:07:52.889894  526230 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08d8899242412948893d7bdf12a68d0584eea0f77b00577c527378f6462e133c 
	I1227 10:07:52.893377  526230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:07:52.893806  526230 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:07:52.893914  526230 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:07:52.893934  526230 cni.go:84] Creating CNI manager for ""
	I1227 10:07:52.893941  526230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:52.897224  526230 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.742065157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.746790205Z" level=info msg="Running pod sandbox: kube-system/kindnet-fgjsl/POD" id=d9472fc2-6098-48fb-adce-a40df9ae6897 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.746840126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.771093597Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=86ac3e44-6018-431e-af28-b72b52d147b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.77139116Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d9472fc2-6098-48fb-adce-a40df9ae6897 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.790078515Z" level=info msg="Ran pod sandbox a91c4e11ef4022ea552669de31bdcfa10adbcb6e75cad845f199e8bdb55bf05d with infra container: kube-system/kube-proxy-524xs/POD" id=86ac3e44-6018-431e-af28-b72b52d147b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.828013305Z" level=info msg="Ran pod sandbox 4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85 with infra container: kube-system/kindnet-fgjsl/POD" id=d9472fc2-6098-48fb-adce-a40df9ae6897 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.843846684Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=0d28402f-ddc7-4913-9ba8-f63edc45c32d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.844262623Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=26800f9d-c076-4dd6-9716-af958a9d5e16 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.848317878Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=3ad79379-6f7a-478c-8920-94a9acd3fa6c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.848660684Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=b3cb7e17-dfb7-432d-b7db-a720b9fcf396 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.852517996Z" level=info msg="Creating container: kube-system/kindnet-fgjsl/kindnet-cni" id=f4212c4f-f106-464b-8f7b-50bad3ba95ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.852629915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.854715636Z" level=info msg="Creating container: kube-system/kube-proxy-524xs/kube-proxy" id=113df369-888b-4a2d-a3bb-f19fe837c946 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.864790584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.87846021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.893311776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.893893642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:44 newest-cni-133340 crio[616]: time="2025-12-27T10:07:44.910567999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.046537627Z" level=info msg="Created container 78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2: kube-system/kindnet-fgjsl/kindnet-cni" id=f4212c4f-f106-464b-8f7b-50bad3ba95ba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.062728213Z" level=info msg="Starting container: 78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2" id=83b5d769-6cd5-4fd8-b225-3df9d0809768 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.067758979Z" level=info msg="Created container 3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1: kube-system/kube-proxy-524xs/kube-proxy" id=113df369-888b-4a2d-a3bb-f19fe837c946 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.078962343Z" level=info msg="Starting container: 3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1" id=ada0ed6f-1116-474c-a7c7-75efa690bec3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.082344153Z" level=info msg="Started container" PID=1074 containerID=78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2 description=kube-system/kindnet-fgjsl/kindnet-cni id=83b5d769-6cd5-4fd8-b225-3df9d0809768 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85
	Dec 27 10:07:45 newest-cni-133340 crio[616]: time="2025-12-27T10:07:45.110856492Z" level=info msg="Started container" PID=1076 containerID=3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1 description=kube-system/kube-proxy-524xs/kube-proxy id=ada0ed6f-1116-474c-a7c7-75efa690bec3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a91c4e11ef4022ea552669de31bdcfa10adbcb6e75cad845f199e8bdb55bf05d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3ec22f1315d2e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   9 seconds ago       Running             kube-proxy                1                   a91c4e11ef402       kube-proxy-524xs                            kube-system
	78fd010d78a7c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   9 seconds ago       Running             kindnet-cni               1                   4ed055e8d3f10       kindnet-fgjsl                               kube-system
	cee879e19c3e8       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   15 seconds ago      Running             kube-scheduler            1                   9f5f82965a8ef       kube-scheduler-newest-cni-133340            kube-system
	c342531334826       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   15 seconds ago      Running             etcd                      1                   abff1b2348346       etcd-newest-cni-133340                      kube-system
	39aa7832ba5e4       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   15 seconds ago      Running             kube-controller-manager   1                   0b1143a36cb92       kube-controller-manager-newest-cni-133340   kube-system
	7a9d01b797fbd       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   15 seconds ago      Running             kube-apiserver            1                   d569adc450bc2       kube-apiserver-newest-cni-133340            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-133340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-133340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2daf445edf4872fd9586416ba5dbf507613db86
	                    minikube.k8s.io/name=newest-cni-133340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_07_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:07:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-133340
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:07:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 10:07:44 +0000   Sat, 27 Dec 2025 10:07:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-133340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                2ffc56ef-4a0e-4350-837c-13fb816f4d7e
	  Boot ID:                    bc2ae459-717b-455e-ba6b-b9cf89c5e3c8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-133340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-fgjsl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-133340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-133340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-524xs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-133340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  33s   node-controller  Node newest-cni-133340 event: Registered Node newest-cni-133340 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-133340 event: Registered Node newest-cni-133340 in Controller
	
	
	==> dmesg <==
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[  +4.962986] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[ +23.051169] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[ +34.383287] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[ +35.522529] overlayfs: idmapped layers are currently not supported
	[Dec27 09:48] overlayfs: idmapped layers are currently not supported
	[Dec27 09:49] overlayfs: idmapped layers are currently not supported
	[Dec27 09:51] overlayfs: idmapped layers are currently not supported
	[Dec27 09:52] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +37.649191] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:05] overlayfs: idmapped layers are currently not supported
	[ +42.108139] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:07] overlayfs: idmapped layers are currently not supported
	[ +29.217037] overlayfs: idmapped layers are currently not supported
	[  +5.170102] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c342531334826d76fa90df32a74fc2a20cd2f093ec252388fe14bd342b7da596] <==
	{"level":"info","ts":"2025-12-27T10:07:39.493495Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:07:39.493560Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:07:39.493801Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:39.493842Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:39.512984Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:07:39.513187Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:07:39.513345Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:40.138282Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:40.138332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:40.138373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:40.138385Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:40.138401Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.142367Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.142430Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:40.142458Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.142468Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:40.150558Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:40.151507Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:40.153531Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:07:40.153840Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:40.150527Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-133340 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:07:40.171070Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:40.171820Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:07:40.194249Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:40.194287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:07:54 up  2:50,  0 user,  load average: 6.75, 3.72, 2.70
	Linux newest-cni-133340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78fd010d78a7cb2f45faea09e3932928ca8bc285ea91b147f7d5b6b8969325e2] <==
	I1227 10:07:45.189682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:07:45.244514       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:07:45.244655       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:07:45.244670       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:07:45.244685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:07:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:07:45.467606       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:07:45.467629       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:07:45.467638       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:07:45.467946       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7a9d01b797fbdb9019d71cb9c00b0d02027fd52f235f01e6a682c54cbb7beeb2] <==
	I1227 10:07:44.136669       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:07:44.136675       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:07:44.136765       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:44.136783       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:44.136819       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:07:44.137154       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:44.137185       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:07:44.154828       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:44.154850       1 policy_source.go:248] refreshing policies
	I1227 10:07:44.164375       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:07:44.164415       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:07:44.230607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1227 10:07:44.333102       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:07:44.584546       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:07:44.716728       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:07:46.041035       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:07:46.144958       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:07:46.201385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:07:46.241809       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:07:46.385959       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.121.23"}
	I1227 10:07:46.421498       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.72.173"}
	I1227 10:07:48.379695       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:07:48.634167       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:07:48.691873       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:07:48.839118       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [39aa7832ba5e4fe2d6d2d87f20829e140105ab816a70d2f5a7edfa283eaa5e91] <==
	I1227 10:07:48.207662       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.207734       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.209838       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.209919       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.209969       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210034       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210083       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210140       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.210405       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.214501       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.214651       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.214749       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.215280       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.216277       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.216325       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.216465       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.221579       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.221732       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.221803       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.223679       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.235506       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.284400       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.304282       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:48.304363       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:07:48.305917       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [3ec22f1315d2ec502303d3a0be296af3a51716ae2198f1d68384e2893282cac1] <==
	I1227 10:07:45.794414       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:07:46.339886       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:46.521341       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:46.521377       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:07:46.521450       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:07:46.672237       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:07:46.672291       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:07:46.683090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:07:46.683476       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:07:46.683664       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:46.684855       1 config.go:200] "Starting service config controller"
	I1227 10:07:46.684936       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:07:46.684987       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:07:46.685015       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:07:46.685051       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:07:46.685079       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:07:46.708492       1 config.go:309] "Starting node config controller"
	I1227 10:07:46.708581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:07:46.708616       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:07:46.796335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:07:46.796439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:07:46.796457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cee879e19c3e8958ad16f09760810606a388ece1bda9a1e8a85a7c7adec6f94f] <==
	I1227 10:07:42.093575       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:07:43.846405       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:07:43.846438       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:07:43.846450       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:07:43.846458       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:07:44.052405       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:07:44.052444       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:44.074110       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:07:44.079714       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:07:44.079811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:07:44.079835       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:44.280427       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.322847     736 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.322946     736 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.322986     736 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.347607     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.348408     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-133340\" already exists" pod="kube-system/kube-apiserver-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.348428     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.392685     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-133340" containerName="kube-apiserver"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.393166     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-133340\" already exists" pod="kube-system/kube-controller-manager-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.393204     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.393367     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-133340" containerName="kube-controller-manager"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.399924     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-133340" containerName="etcd"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.401240     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: E1227 10:07:44.422282     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-133340\" already exists" pod="kube-system/kube-scheduler-newest-cni-133340"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.481357     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562414     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21306208-0f93-4fa6-9524-38dc4245c9de-xtables-lock\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562461     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-xtables-lock\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562483     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21306208-0f93-4fa6-9524-38dc4245c9de-lib-modules\") pod \"kube-proxy-524xs\" (UID: \"21306208-0f93-4fa6-9524-38dc4245c9de\") " pod="kube-system/kube-proxy-524xs"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562517     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-lib-modules\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.562561     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c7827a10-1fba-4ca9-a964-97f5b7ea1ceb-cni-cfg\") pod \"kindnet-fgjsl\" (UID: \"c7827a10-1fba-4ca9-a964-97f5b7ea1ceb\") " pod="kube-system/kindnet-fgjsl"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: I1227 10:07:44.642603     736 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:07:44 newest-cni-133340 kubelet[736]: W1227 10:07:44.821484     736 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/83f3564ca7865c8f0fff28cd70ed0fbf3baa50a4f4aaaa707de637f5214ccf50/crio-4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85 WatchSource:0}: Error finding container 4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85: Status 404 returned error can't find the container with id 4ed055e8d3f101d8915448acaf8d143aa45834bf1976030165a37ff572280a85
	Dec 27 10:07:45 newest-cni-133340 kubelet[736]: E1227 10:07:45.226569     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-133340" containerName="kube-scheduler"
	Dec 27 10:07:48 newest-cni-133340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:07:48 newest-cni-133340 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:07:48 newest-cni-133340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-133340 -n newest-cni-133340
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-133340 -n newest-cni-133340: exit status 2 (412.897902ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-133340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t: exit status 1 (99.539275ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-ztmc7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-r5k2g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-r2c5t" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-133340 describe pod coredns-7d764666f9-ztmc7 storage-provisioner dashboard-metrics-scraper-867fb5f87b-r5k2g kubernetes-dashboard-b84665fb8-r2c5t: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.81s)

                                                
                                    

Test pass (274/332)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.35.0/json-events 3.25
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 140.49
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.85
48 TestAddons/StoppedEnableDisable 12.48
49 TestCertOptions 30.07
50 TestCertExpiration 224.66
58 TestErrorSpam/setup 23.5
59 TestErrorSpam/start 0.84
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 5.9
62 TestErrorSpam/unpause 5.5
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 46.45
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.58
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
75 TestFunctional/serial/CacheCmd/cache/add_local 1.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 32.28
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.42
86 TestFunctional/serial/LogsFileCmd 1.52
87 TestFunctional/serial/InvalidService 4.44
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 11.68
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.19
97 TestFunctional/parallel/ServiceCmdConnect 8.63
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 21.94
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.39
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.6
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 8.47
130 TestFunctional/parallel/ServiceCmd/List 0.61
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.37
135 TestFunctional/parallel/MountCmd/specific-port 2.36
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.45
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 0.73
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
144 TestFunctional/parallel/ImageCommands/Setup 0.64
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.13
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.39
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.94
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 137.07
163 TestMultiControlPlane/serial/DeployApp 6.33
164 TestMultiControlPlane/serial/PingHostFromPods 1.53
165 TestMultiControlPlane/serial/AddWorkerNode 31.64
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 19.87
169 TestMultiControlPlane/serial/StopSecondaryNode 12.97
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.27
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 110.19
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.8
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 36.21
177 TestMultiControlPlane/serial/RestartCluster 69.73
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
179 TestMultiControlPlane/serial/AddSecondaryNode 50.56
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 45.31
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.82
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 32.37
211 TestKicCustomNetwork/use_default_bridge_network 30.13
212 TestKicExistingNetwork 30.28
213 TestKicCustomSubnet 30.92
214 TestKicStaticIP 31.84
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 63.51
219 TestMountStart/serial/StartWithMountFirst 8.86
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.66
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8.54
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 70.04
231 TestMultiNode/serial/DeployApp2Nodes 4.92
232 TestMultiNode/serial/PingHostFrom2Pods 0.87
233 TestMultiNode/serial/AddNode 28.28
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.74
236 TestMultiNode/serial/CopyFile 10.57
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 8.63
239 TestMultiNode/serial/RestartKeepsNodes 72.71
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 23.99
242 TestMultiNode/serial/RestartMultiNode 50.98
243 TestMultiNode/serial/ValidateNameConflict 30.24
250 TestScheduledStopUnix 102.35
253 TestInsufficientStorage 12.62
254 TestRunningBinaryUpgrade 307.5
256 TestKubernetesUpgrade 186.5
257 TestMissingContainerUpgrade 119.26
259 TestPause/serial/Start 57.24
260 TestPause/serial/SecondStartNoReconfiguration 16
262 TestStoppedBinaryUpgrade/Setup 0.83
263 TestStoppedBinaryUpgrade/Upgrade 309.71
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.94
272 TestPreload/Start-NoPreload-PullImage 65.11
273 TestPreload/Restart-With-Preload-Check-User-Image 51.29
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
277 TestNoKubernetes/serial/StartWithK8s 27.76
278 TestNoKubernetes/serial/StartWithStopK8s 17.63
279 TestNoKubernetes/serial/Start 7.75
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
282 TestNoKubernetes/serial/ProfileList 1.06
283 TestNoKubernetes/serial/Stop 1.3
284 TestNoKubernetes/serial/StartNoArgs 7.05
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
293 TestNetworkPlugins/group/false 3.62
298 TestStartStop/group/old-k8s-version/serial/FirstStart 64.04
299 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
301 TestStartStop/group/old-k8s-version/serial/Stop 12.03
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 53.73
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/no-preload/serial/FirstStart 55.81
310 TestStartStop/group/no-preload/serial/DeployApp 8.3
312 TestStartStop/group/no-preload/serial/Stop 12.03
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/no-preload/serial/SecondStart 50.81
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/embed-certs/serial/FirstStart 54.06
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.53
323 TestStartStop/group/embed-certs/serial/DeployApp 9.36
325 TestStartStop/group/embed-certs/serial/Stop 12.11
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
328 TestStartStop/group/embed-certs/serial/SecondStart 51.54
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.31
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.04
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/newest-cni/serial/FirstStart 36.02
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
341 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
343 TestPreload/PreloadSrc/gcs 4.56
344 TestPreload/PreloadSrc/github 3.94
345 TestPreload/PreloadSrc/gcs-cached 0.55
346 TestNetworkPlugins/group/auto/Start 54.44
347 TestStartStop/group/newest-cni/serial/DeployApp 0
349 TestStartStop/group/newest-cni/serial/Stop 1.68
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
351 TestStartStop/group/newest-cni/serial/SecondStart 17.11
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
356 TestNetworkPlugins/group/kindnet/Start 47.89
357 TestNetworkPlugins/group/auto/KubeletFlags 0.46
358 TestNetworkPlugins/group/auto/NetCatPod 12.34
359 TestNetworkPlugins/group/auto/DNS 0.19
360 TestNetworkPlugins/group/auto/Localhost 0.16
361 TestNetworkPlugins/group/auto/HairPin 0.16
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/Start 71.94
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
365 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
366 TestNetworkPlugins/group/kindnet/DNS 0.21
367 TestNetworkPlugins/group/kindnet/Localhost 0.21
368 TestNetworkPlugins/group/kindnet/HairPin 0.17
369 TestNetworkPlugins/group/custom-flannel/Start 55.89
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/calico/KubeletFlags 0.37
372 TestNetworkPlugins/group/calico/NetCatPod 11.43
373 TestNetworkPlugins/group/calico/DNS 0.19
374 TestNetworkPlugins/group/calico/Localhost 0.14
375 TestNetworkPlugins/group/calico/HairPin 0.13
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.4
378 TestNetworkPlugins/group/custom-flannel/DNS 0.21
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
381 TestNetworkPlugins/group/enable-default-cni/Start 50.01
382 TestNetworkPlugins/group/flannel/Start 56.96
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.35
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
388 TestNetworkPlugins/group/flannel/ControllerPod 6
389 TestNetworkPlugins/group/bridge/Start 47.48
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
391 TestNetworkPlugins/group/flannel/NetCatPod 10.42
392 TestNetworkPlugins/group/flannel/DNS 0.19
393 TestNetworkPlugins/group/flannel/Localhost 0.17
394 TestNetworkPlugins/group/flannel/HairPin 0.16
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
396 TestNetworkPlugins/group/bridge/NetCatPod 9.26
397 TestNetworkPlugins/group/bridge/DNS 0.17
398 TestNetworkPlugins/group/bridge/Localhost 0.13
399 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (5.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-421590 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-421590 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.803986978s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.80s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 09:12:39.296808  303043 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1227 09:12:39.296882  303043 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-421590
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-421590: exit status 85 (90.153409ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-421590 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-421590 │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:12:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:12:33.538667  303049 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:12:33.538872  303049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:12:33.538900  303049 out.go:374] Setting ErrFile to fd 2...
	I1227 09:12:33.538920  303049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:12:33.539222  303049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	W1227 09:12:33.539397  303049 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22344-301174/.minikube/config/config.json: open /home/jenkins/minikube-integration/22344-301174/.minikube/config/config.json: no such file or directory
	I1227 09:12:33.539855  303049 out.go:368] Setting JSON to true
	I1227 09:12:33.540728  303049 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6903,"bootTime":1766819851,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:12:33.540829  303049 start.go:143] virtualization:  
	I1227 09:12:33.546652  303049 out.go:99] [download-only-421590] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 09:12:33.546856  303049 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 09:12:33.546981  303049 notify.go:221] Checking for updates...
	I1227 09:12:33.550417  303049 out.go:171] MINIKUBE_LOCATION=22344
	I1227 09:12:33.554288  303049 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:12:33.557580  303049 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:12:33.560619  303049 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:12:33.563776  303049 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:12:33.569796  303049 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:12:33.570120  303049 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:12:33.604062  303049 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:12:33.604173  303049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:12:33.664435  303049 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:12:33.655246701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:12:33.664545  303049 docker.go:319] overlay module found
	I1227 09:12:33.667694  303049 out.go:99] Using the docker driver based on user configuration
	I1227 09:12:33.667748  303049 start.go:309] selected driver: docker
	I1227 09:12:33.667755  303049 start.go:928] validating driver "docker" against <nil>
	I1227 09:12:33.667853  303049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:12:33.720506  303049 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:12:33.711839598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:12:33.720664  303049 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:12:33.720952  303049 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:12:33.721107  303049 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:12:33.724266  303049 out.go:171] Using Docker driver with root privileges
	I1227 09:12:33.727150  303049 cni.go:84] Creating CNI manager for ""
	I1227 09:12:33.727219  303049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:12:33.727231  303049 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:12:33.727317  303049 start.go:353] cluster config:
	{Name:download-only-421590 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-421590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:12:33.730205  303049 out.go:99] Starting "download-only-421590" primary control-plane node in "download-only-421590" cluster
	I1227 09:12:33.730222  303049 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:12:33.733204  303049 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:12:33.733240  303049 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:12:33.733391  303049 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:12:33.747515  303049 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:12:33.747716  303049 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:12:33.747815  303049 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:12:33.783360  303049 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:12:33.783399  303049 cache.go:65] Caching tarball of preloaded images
	I1227 09:12:33.784275  303049 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:12:33.787624  303049 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 09:12:33.787653  303049 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:12:33.787661  303049 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1227 09:12:33.867901  303049 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1227 09:12:33.868042  303049 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:12:36.693174  303049 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 09:12:36.693921  303049 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/download-only-421590/config.json ...
	I1227 09:12:36.693998  303049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/download-only-421590/config.json: {Name:mkbe3cc065045560ee7fc983c2495176c91e8c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:12:36.695071  303049 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:12:36.695908  303049 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22344-301174/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-421590 host does not exist
	  To start a cluster, run: "minikube start -p download-only-421590"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
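Note for reference: the LogsDuration output above shows the preload tarball being fetched only after an md5 checksum is obtained from the GCS API. A minimal sketch of repeating that verification by hand, assuming a POSIX shell with curl and md5sum available; the URL and checksum are copied from the log, and the /tmp output path is arbitrary:
	# Fetch the arm64 CRI-O preload for Kubernetes v1.28.0 (URL taken from the log above).
	curl -fL -o /tmp/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	# Compare against the checksum the GCS API returned in the log (e092595ade89dbfc477bd4cd6b9c633b).
	echo "e092595ade89dbfc477bd4cd6b9c633b  /tmp/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -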

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-421590
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-432357 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-432357 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.254035072s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.25s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 09:12:42.989806  303043 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1227 09:12:42.989844  303043 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-432357
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-432357: exit status 85 (92.374757ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-421590 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-421590 │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-421590                                                                                                                                                   │ download-only-421590 │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │ 27 Dec 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-432357 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-432357 │ jenkins │ v1.37.0 │ 27 Dec 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:12:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:12:39.778021  303245 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:12:39.778132  303245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:12:39.778144  303245 out.go:374] Setting ErrFile to fd 2...
	I1227 09:12:39.778177  303245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:12:39.782416  303245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:12:39.783052  303245 out.go:368] Setting JSON to true
	I1227 09:12:39.784206  303245 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6909,"bootTime":1766819851,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:12:39.784319  303245 start.go:143] virtualization:  
	I1227 09:12:39.787610  303245 out.go:99] [download-only-432357] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:12:39.787817  303245 notify.go:221] Checking for updates...
	I1227 09:12:39.790469  303245 out.go:171] MINIKUBE_LOCATION=22344
	I1227 09:12:39.793316  303245 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:12:39.796222  303245 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:12:39.799041  303245 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:12:39.802040  303245 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:12:39.807681  303245 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:12:39.807985  303245 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:12:39.830430  303245 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:12:39.830525  303245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:12:39.890556  303245 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:12:39.881387429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:12:39.890664  303245 docker.go:319] overlay module found
	I1227 09:12:39.893658  303245 out.go:99] Using the docker driver based on user configuration
	I1227 09:12:39.893698  303245 start.go:309] selected driver: docker
	I1227 09:12:39.893707  303245 start.go:928] validating driver "docker" against <nil>
	I1227 09:12:39.893828  303245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:12:39.952862  303245 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:12:39.94334019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:12:39.953019  303245 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:12:39.953277  303245 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:12:39.953437  303245 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:12:39.956513  303245 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-432357 host does not exist
	  To start a cluster, run: "minikube start -p download-only-432357"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-432357
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1227 09:12:44.153506  303043 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-053347 --alsologtostderr --binary-mirror http://127.0.0.1:39055 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-053347" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-053347
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-730938
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-730938: exit status 85 (200.418593ms)

                                                
                                                
-- stdout --
	* Profile "addons-730938" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-730938"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-730938
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-730938: exit status 85 (167.586756ms)

                                                
                                                
-- stdout --
	* Profile "addons-730938" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-730938"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                    
TestAddons/Setup (140.49s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-730938 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-730938 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.492958377s)
--- PASS: TestAddons/Setup (140.49s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-730938 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-730938 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-730938 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-730938 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [96197acb-19f5-4921-858e-2a6227b427b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [96197acb-19f5-4921-858e-2a6227b427b6] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004521611s
addons_test.go:696: (dbg) Run:  kubectl --context addons-730938 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-730938 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-730938 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-730938 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-730938
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-730938: (12.198470079s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-730938
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-730938
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-730938
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

                                                
                                    
TestCertOptions (30.07s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1227 09:59:17.719404  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-057459 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.311638821s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-057459 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-057459 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-057459 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-057459" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-057459
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-057459: (2.054077721s)
--- PASS: TestCertOptions (30.07s)

                                                
                                    
TestCertExpiration (224.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-028595 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.689022305s)
E1227 09:53:09.864748  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:54:17.722048  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:55:06.818452  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-028595 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.540943462s)
helpers_test.go:176: Cleaning up "cert-expiration-028595" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-028595
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-028595: (2.423263399s)
--- PASS: TestCertExpiration (224.66s)
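Note for reference: the two start invocations above first issue certificates valid for 3m and then re-issue them with --cert-expiration=8760h. A minimal sketch for inspecting the resulting apiserver certificate expiry directly on the node, reusing the cert path and ssh invocation style shown elsewhere in this report (this assumes the profile has not yet been deleted by the cleanup step):
	# Print the notAfter date of the apiserver certificate inside the cert-expiration node.
	out/minikube-linux-arm64 ssh -p cert-expiration-028595 -- "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"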

                                                
                                    
TestErrorSpam/setup (23.5s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-508697 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-508697 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-508697 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-508697 --driver=docker  --container-runtime=crio: (23.503846662s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (23.50s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (5.9s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause: exit status 80 (1.83692207s)

                                                
                                                
-- stdout --
	* Pausing node nospam-508697 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause: exit status 80 (2.373727283s)

                                                
                                                
-- stdout --
	* Pausing node nospam-508697 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:16:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause: exit status 80 (1.684488608s)

                                                
                                                
-- stdout --
	* Pausing node nospam-508697 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:17:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.90s)
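Note for reference: all three pause attempts above fail the same way because "sudo runc list -f json" inside the node cannot open /run/runc. A minimal sketch for re-running that check by hand, mirroring the ssh invocation style used elsewhere in this report; the profile name is taken from the log, and these are diagnostic suggestions rather than part of the test:
	# Re-run the exact listing the pause path uses, directly on the node.
	out/minikube-linux-arm64 ssh -p nospam-508697 -- "sudo runc list -f json"
	# Check whether the runc state directory exists at all.
	out/minikube-linux-arm64 ssh -p nospam-508697 -- "sudo ls -ld /run/runc"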

                                                
                                    
TestErrorSpam/unpause (5.5s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause: exit status 80 (1.765043419s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-508697 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause: exit status 80 (1.609089242s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-508697 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:17:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause: exit status 80 (2.12602117s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-508697 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:17:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.50s)

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 stop: (1.312945401s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-508697 --log_dir /tmp/nospam-508697 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22344-301174/.minikube/files/etc/test/nested/copy/303043/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.45s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-725125 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-725125 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (46.45254566s)
--- PASS: TestFunctional/serial/StartWithProxy (46.45s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.58s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 09:18:00.013737  303043 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-725125 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-725125 --alsologtostderr -v=8: (28.582924448s)
functional_test.go:678: soft start took 28.583468783s for "functional-725125" cluster.
I1227 09:18:28.596954  303043 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (28.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-725125 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-725125 cache add registry.k8s.io/pause:3.1: (1.166870337s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-725125 cache add registry.k8s.io/pause:3.3: (1.170954778s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-725125 cache add registry.k8s.io/pause:latest: (1.130748199s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-725125 /tmp/TestFunctionalserialCacheCmdcacheadd_local4119555816/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cache add minikube-local-cache-test:functional-725125
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cache delete minikube-local-cache-test:functional-725125
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-725125
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.190082ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 kubectl -- --context functional-725125 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-725125 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.28s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-725125 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-725125 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.279008119s)
functional_test.go:776: restart took 32.279100083s for "functional-725125" cluster.
I1227 09:19:08.395200  303043 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (32.28s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-725125 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-725125 logs: (1.419492838s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 logs --file /tmp/TestFunctionalserialLogsFileCmd4189206972/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-725125 logs --file /tmp/TestFunctionalserialLogsFileCmd4189206972/001/logs.txt: (1.518975732s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.44s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-725125 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-725125
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-725125: exit status 115 (410.458948ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31382 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-725125 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 config get cpus: exit status 14 (65.613059ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 config get cpus: exit status 14 (65.314872ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-725125 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-725125 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 326752: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.68s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-725125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-725125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.690122ms)

                                                
                                                
-- stdout --
	* [functional-725125] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:19:46.219097  326488 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:19:46.219235  326488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:19:46.219249  326488 out.go:374] Setting ErrFile to fd 2...
	I1227 09:19:46.219268  326488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:19:46.219550  326488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:19:46.219956  326488 out.go:368] Setting JSON to false
	I1227 09:19:46.220937  326488 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7336,"bootTime":1766819851,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:19:46.221046  326488 start.go:143] virtualization:  
	I1227 09:19:46.224595  326488 out.go:179] * [functional-725125] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:19:46.227692  326488 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:19:46.227849  326488 notify.go:221] Checking for updates...
	I1227 09:19:46.233582  326488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:19:46.236528  326488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:19:46.239321  326488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:19:46.242107  326488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:19:46.244960  326488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:19:46.248400  326488 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:19:46.249021  326488 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:19:46.287547  326488 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:19:46.287669  326488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:19:46.346430  326488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:19:46.336145041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:19:46.346548  326488 docker.go:319] overlay module found
	I1227 09:19:46.349596  326488 out.go:179] * Using the docker driver based on existing profile
	I1227 09:19:46.352530  326488 start.go:309] selected driver: docker
	I1227 09:19:46.352554  326488 start.go:928] validating driver "docker" against &{Name:functional-725125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-725125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:19:46.352664  326488 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:19:46.356206  326488 out.go:203] 
	W1227 09:19:46.359258  326488 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 09:19:46.362244  326488 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-725125 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-725125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-725125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (258.507272ms)

                                                
                                                
-- stdout --
	* [functional-725125] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:19:45.974422  326428 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:19:45.974638  326428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:19:45.974677  326428 out.go:374] Setting ErrFile to fd 2...
	I1227 09:19:45.974698  326428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:19:45.975112  326428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:19:45.975518  326428 out.go:368] Setting JSON to false
	I1227 09:19:45.976936  326428 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7335,"bootTime":1766819851,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:19:45.977035  326428 start.go:143] virtualization:  
	I1227 09:19:45.981583  326428 out.go:179] * [functional-725125] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 09:19:45.986039  326428 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:19:45.986330  326428 notify.go:221] Checking for updates...
	I1227 09:19:45.994201  326428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:19:46.001189  326428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:19:46.005267  326428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:19:46.008856  326428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:19:46.012159  326428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:19:46.015891  326428 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:19:46.016500  326428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:19:46.067176  326428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:19:46.067292  326428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:19:46.149106  326428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:19:46.139216513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:19:46.149218  326428 docker.go:319] overlay module found
	I1227 09:19:46.152558  326428 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 09:19:46.155562  326428 start.go:309] selected driver: docker
	I1227 09:19:46.155582  326428 start.go:928] validating driver "docker" against &{Name:functional-725125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-725125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:19:46.155694  326428 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:19:46.159338  326428 out.go:203] 
	W1227 09:19:46.162253  326428 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 09:19:46.165081  326428 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-725125 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-725125 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-9b8gf" [c602a88d-c7fc-4720-8832-f64dd62d71bf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-9b8gf" [c602a88d-c7fc-4720-8832-f64dd62d71bf] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.011946663s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31739
functional_test.go:1685: http://192.168.49.2:31739: success! body:
Request served by hello-node-connect-5d95464fd4-9b8gf

HTTP/1.1 GET /

Host: 192.168.49.2:31739
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (21.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [1d4c1ae5-2d67-4404-8ff6-b990d84d9a87] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003858328s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-725125 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-725125 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-725125 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-725125 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c688b020-5c1e-4ce8-a982-183620747efc] Pending
helpers_test.go:353: "sp-pod" [c688b020-5c1e-4ce8-a982-183620747efc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [c688b020-5c1e-4ce8-a982-183620747efc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003801678s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-725125 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-725125 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-725125 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [58387672-92c1-4f53-8114-f1e0b6fa8d13] Pending
helpers_test.go:353: "sp-pod" [58387672-92c1-4f53-8114-f1e0b6fa8d13] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003391629s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-725125 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.94s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh -n functional-725125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cp functional-725125:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2001622394/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh -n functional-725125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh -n functional-725125 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.39s)

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/303043/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /etc/test/nested/copy/303043/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/303043.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /etc/ssl/certs/303043.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/303043.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /usr/share/ca-certificates/303043.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3030432.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /etc/ssl/certs/3030432.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/3030432.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /usr/share/ca-certificates/3030432.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.60s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-725125 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh "sudo systemctl is-active docker": exit status 1 (372.376187ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh "sudo systemctl is-active containerd": exit status 1 (325.445894ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-725125 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-725125 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-725125 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 324265: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-725125 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-725125 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-725125 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [397766b9-343f-422f-a8d0-9e6f2039d1ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [397766b9-343f-422f-a8d0-9e6f2039d1ce] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003440043s
I1227 09:19:27.168083  303043 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-725125 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.56.69 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
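For readers reproducing the tunnel checks above, a small Go sketch (not the test code itself): it reads the LoadBalancer ingress IP that "minikube tunnel" assigned to nginx-svc, using the same jsonpath query as WaitService/IngressIP, then issues one HTTP request against it as AccessDirect does. The context name comes from this report; the 5s timeout is an arbitrary choice.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same jsonpath query the test runs to find the tunnel-assigned IP.
	out, err := exec.Command("kubectl", "--context", "functional-725125",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))

	// Probe the service through the running tunnel.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel endpoint http://%s answered %s\n", ip, resp.Status)
}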

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-725125 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-725125 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-725125 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-76fmc" [2c16eabc-f845-42bd-94e8-4c7bd93ee764] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-76fmc" [2c16eabc-f845-42bd-94e8-4c7bd93ee764] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003244076s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "370.928762ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "59.64116ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "363.142975ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "59.159365ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
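As a side note on the "Took ..." timings reported by the ProfileCmd tests above, a rough Go sketch of measuring the same two invocations; the flags are the ones shown in the log, while the timing wrapper itself is purely illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timed runs the minikube binary with the given arguments and reports the wall-clock duration.
func timed(args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command("out/minikube-linux-arm64", args...).Run()
	return time.Since(start), err
}

func main() {
	full, err := timed("profile", "list", "-o", "json")
	if err != nil {
		panic(err)
	}
	light, err := timed("profile", "list", "-o", "json", "--light")
	if err != nil {
		panic(err)
	}
	fmt.Printf("profile list: %v with full status, %v with --light\n", full, light)
}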

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdany-port607863901/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766827181493754868" to /tmp/TestFunctionalparallelMountCmdany-port607863901/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766827181493754868" to /tmp/TestFunctionalparallelMountCmdany-port607863901/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766827181493754868" to /tmp/TestFunctionalparallelMountCmdany-port607863901/001/test-1766827181493754868
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.537027ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 09:19:41.834580  303043 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 09:19 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 09:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 09:19 test-1766827181493754868
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh cat /mount-9p/test-1766827181493754868
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-725125 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [d30e337c-9923-49c2-882d-b3c24818321b] Pending
helpers_test.go:353: "busybox-mount" [d30e337c-9923-49c2-882d-b3c24818321b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [d30e337c-9923-49c2-882d-b3c24818321b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [d30e337c-9923-49c2-882d-b3c24818321b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004341735s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-725125 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdany-port607863901/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.47s)
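The any-port log above shows the first findmnt probe racing the mount daemon and being retried after 500ms. Below is a small Go sketch of that polling loop (not the test's code); the command and mount point are taken from the log, while the attempt count is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		// Same in-guest check the test uses to see whether the 9p mount is up.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-725125",
			"ssh", "findmnt -T /mount-9p").Output()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Println("/mount-9p is mounted over 9p")
			return
		}
		fmt.Printf("attempt %d: not mounted yet (%v)\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /mount-9p")
}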

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 service list -o json
functional_test.go:1509: Took "585.361688ms" to run "out/minikube-linux-arm64 -p functional-725125 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31226
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31226
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
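A short Go sketch of the ServiceCmd flow recorded above (again, not the test's own code): resolve the NodePort URL with "service hello-node --url" and issue one request against it. Binary path and profile name are from this report; the single-URL assumption holds here because only port 8080 is exposed.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort endpoint, as ServiceCmd/URL does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-725125",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
}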

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdspecific-port1799830041/001:/mount-9p --alsologtostderr -v=1 --port 41505]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (639.532045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 09:19:50.602344  303043 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdspecific-port1799830041/001:/mount-9p --alsologtostderr -v=1 --port 41505] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh "sudo umount -f /mount-9p": exit status 1 (387.195482ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-725125 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdspecific-port1799830041/001:/mount-9p --alsologtostderr -v=1 --port 41505] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2754442241/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2754442241/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2754442241/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T" /mount1: exit status 1 (961.689524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-725125 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2754442241/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2754442241/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-725125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2754442241/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-725125 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-725125
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-725125 image ls --format short --alsologtostderr:
I1227 09:20:02.815768  329343 out.go:360] Setting OutFile to fd 1 ...
I1227 09:20:02.822097  329343 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:02.822127  329343 out.go:374] Setting ErrFile to fd 2...
I1227 09:20:02.822134  329343 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:02.822545  329343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
I1227 09:20:02.825405  329343 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:02.825590  329343 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:02.826344  329343 cli_runner.go:164] Run: docker container inspect functional-725125 --format={{.State.Status}}
I1227 09:20:02.849854  329343 ssh_runner.go:195] Run: systemctl --version
I1227 09:20:02.849918  329343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-725125
I1227 09:20:02.873403  329343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/functional-725125/id_rsa Username:docker}
I1227 09:20:02.977307  329343 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-725125 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-725125                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test               │ functional-725125                     │ 08b2a00fb1134 │ 3.33kB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 962dbbc0e55ec │ 55.1MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-725125 image ls --format table --alsologtostderr:
I1227 09:20:03.380822  329515 out.go:360] Setting OutFile to fd 1 ...
I1227 09:20:03.380968  329515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:03.380980  329515 out.go:374] Setting ErrFile to fd 2...
I1227 09:20:03.380986  329515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:03.381306  329515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
I1227 09:20:03.381961  329515 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:03.382125  329515 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:03.382854  329515 cli_runner.go:164] Run: docker container inspect functional-725125 --format={{.State.Status}}
I1227 09:20:03.402350  329515 ssh_runner.go:195] Run: systemctl --version
I1227 09:20:03.402408  329515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-725125
I1227 09:20:03.435599  329515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/functional-725125/id_rsa Username:docker}
I1227 09:20:03.549278  329515 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-725125 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e
343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077764"},{"id":"c3fcf259c473a57a5d7da1
16e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"},{"id":"08b2a00fb11348f8ff57624bf0ff5fec6d2cd203577133951833c29717334006","repoDigests":["localhost/minikube-local-cache-test@sha256:a1b33dd28af4cbb17cc83ac1cbd7098fa365ea8aa0e959d81061ded53391daeb"],"repoTags":["localhost/minikube-local-cache-test:functional-725125"],"size":"3328"},{"id":"e08f4d9d2e6ede8185064c13b41f8ee
ee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"i
d":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-g
libc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"s
ize":"4789170"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-725125 image ls --format json --alsologtostderr:
I1227 09:20:03.131646  329422 out.go:360] Setting OutFile to fd 1 ...
I1227 09:20:03.131752  329422 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:03.131758  329422 out.go:374] Setting ErrFile to fd 2...
I1227 09:20:03.131763  329422 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:03.132154  329422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
I1227 09:20:03.132900  329422 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:03.133019  329422 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:03.133550  329422 cli_runner.go:164] Run: docker container inspect functional-725125 --format={{.State.Status}}
I1227 09:20:03.153000  329422 ssh_runner.go:195] Run: systemctl --version
I1227 09:20:03.153058  329422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-725125
I1227 09:20:03.171429  329422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/functional-725125/id_rsa Username:docker}
I1227 09:20:03.273915  329422 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
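Since the JSON listing above is hard to eyeball, here is a small Go sketch that decodes it. The field names mirror the output shown in this report (id, repoDigests, repoTags, and size in bytes as a decimal string); that shape may not be a stable interface across minikube versions.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the objects visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-725125",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-14.14s  %10s bytes  %d tag(s)\n", img.ID, img.Size, len(img.RepoTags))
	}
}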

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-725125 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 08b2a00fb11348f8ff57624bf0ff5fec6d2cd203577133951833c29717334006
repoDigests:
- localhost/minikube-local-cache-test@sha256:a1b33dd28af4cbb17cc83ac1cbd7098fa365ea8aa0e959d81061ded53391daeb
repoTags:
- localhost/minikube-local-cache-test:functional-725125
size: "3328"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077764"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4789170"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-725125 image ls --format yaml --alsologtostderr:
I1227 09:20:02.836678  329347 out.go:360] Setting OutFile to fd 1 ...
I1227 09:20:02.836898  329347 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:02.836926  329347 out.go:374] Setting ErrFile to fd 2...
I1227 09:20:02.836947  329347 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:02.837250  329347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
I1227 09:20:02.837986  329347 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:02.838213  329347 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:02.838956  329347 cli_runner.go:164] Run: docker container inspect functional-725125 --format={{.State.Status}}
I1227 09:20:02.856915  329347 ssh_runner.go:195] Run: systemctl --version
I1227 09:20:02.856970  329347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-725125
I1227 09:20:02.882586  329347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/functional-725125/id_rsa Username:docker}
I1227 09:20:02.985392  329347 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-725125 ssh pgrep buildkitd: exit status 1 (379.208764ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image build -t localhost/my-image:functional-725125 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-725125 image build -t localhost/my-image:functional-725125 testdata/build --alsologtostderr: (3.230347399s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-725125 image build -t localhost/my-image:functional-725125 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 958415e76fe
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-725125
--> 7f15ddffa7f
Successfully tagged localhost/my-image:functional-725125
7f15ddffa7f542b15e7aee8eeff060ed2576cb41feefc4c12d9a059bce4e2693
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-725125 image build -t localhost/my-image:functional-725125 testdata/build --alsologtostderr:
I1227 09:20:03.478304  329535 out.go:360] Setting OutFile to fd 1 ...
I1227 09:20:03.481734  329535 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:03.481755  329535 out.go:374] Setting ErrFile to fd 2...
I1227 09:20:03.481762  329535 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:20:03.482107  329535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
I1227 09:20:03.483440  329535 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:03.484510  329535 config.go:182] Loaded profile config "functional-725125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:20:03.486384  329535 cli_runner.go:164] Run: docker container inspect functional-725125 --format={{.State.Status}}
I1227 09:20:03.511676  329535 ssh_runner.go:195] Run: systemctl --version
I1227 09:20:03.511727  329535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-725125
I1227 09:20:03.530905  329535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/functional-725125/id_rsa Username:docker}
I1227 09:20:03.648839  329535 build_images.go:162] Building image from path: /tmp/build.99295739.tar
I1227 09:20:03.648918  329535 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 09:20:03.656879  329535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.99295739.tar
I1227 09:20:03.660492  329535 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.99295739.tar: stat -c "%s %y" /var/lib/minikube/build/build.99295739.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.99295739.tar': No such file or directory
I1227 09:20:03.660523  329535 ssh_runner.go:362] scp /tmp/build.99295739.tar --> /var/lib/minikube/build/build.99295739.tar (3072 bytes)
I1227 09:20:03.678064  329535 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.99295739
I1227 09:20:03.686350  329535 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.99295739 -xf /var/lib/minikube/build/build.99295739.tar
I1227 09:20:03.695108  329535 crio.go:315] Building image: /var/lib/minikube/build/build.99295739
I1227 09:20:03.695208  329535 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-725125 /var/lib/minikube/build/build.99295739 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1227 09:20:06.614944  329535 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-725125 /var/lib/minikube/build/build.99295739 --cgroup-manager=cgroupfs: (2.919704794s)
I1227 09:20:06.615017  329535 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.99295739
I1227 09:20:06.623093  329535 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.99295739.tar
I1227 09:20:06.631058  329535 build_images.go:218] Built localhost/my-image:functional-725125 from /tmp/build.99295739.tar
I1227 09:20:06.631099  329535 build_images.go:134] succeeded building to: functional-725125
I1227 09:20:06.631119  329535 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls
E1227 09:20:06.815302  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:06.820583  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:06.830860  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:06.851156  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
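The stderr above shows how an image build lands on a crio node: the build context is shipped into the guest as a tar, unpacked under /var/lib/minikube/build, and built with podman. A compressed Go sketch of those in-guest steps follows; it assumes the context tar is already at the path from the log and is not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// inGuest runs one shell command inside the node via "minikube ssh".
func inGuest(cmd string) error {
	return exec.Command("out/minikube-linux-arm64", "-p", "functional-725125", "ssh", cmd).Run()
}

func main() {
	steps := []string{
		"sudo mkdir -p /var/lib/minikube/build/build.99295739",
		"sudo tar -C /var/lib/minikube/build/build.99295739 -xf /var/lib/minikube/build/build.99295739.tar",
		"sudo podman build -t localhost/my-image:functional-725125 /var/lib/minikube/build/build.99295739 --cgroup-manager=cgroupfs",
		"sudo rm -rf /var/lib/minikube/build/build.99295739 /var/lib/minikube/build/build.99295739.tar",
	}
	for _, step := range steps {
		if err := inGuest(step); err != nil {
			panic(fmt.Errorf("step %q failed: %w", step, err))
		}
	}
	fmt.Println("built localhost/my-image:functional-725125")
}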

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125 --alsologtostderr
2025/12/27 09:19:58 [DEBUG] GET http://127.0.0.1:40551/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-725125 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
E1227 09:20:06.892171  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-725125
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-725125
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-725125
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (137.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 09:20:11.936643  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:17.057778  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:27.298841  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:47.779316  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:21:28.740093  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m16.222620177s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (137.07s)

TestMultiControlPlane/serial/DeployApp (6.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 kubectl -- rollout status deployment/busybox: (3.647768193s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-qgb2q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-t2q5b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-v46sn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-qgb2q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-t2q5b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-v46sn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-qgb2q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-t2q5b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-v46sn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.33s)

TestMultiControlPlane/serial/PingHostFromPods (1.53s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-qgb2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-qgb2q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-t2q5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-t2q5b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-v46sn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 kubectl -- exec busybox-769dd8b7dd-v46sn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

TestMultiControlPlane/serial/AddWorkerNode (31.64s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node add --alsologtostderr -v 5
E1227 09:22:50.660823  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 node add --alsologtostderr -v 5: (30.541087754s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5: (1.098801979s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.64s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-718691 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077403864s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (19.87s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 status --output json --alsologtostderr -v 5: (1.023245118s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp testdata/cp-test.txt ha-718691:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111579691/001/cp-test_ha-718691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691:/home/docker/cp-test.txt ha-718691-m02:/home/docker/cp-test_ha-718691_ha-718691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test_ha-718691_ha-718691-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691:/home/docker/cp-test.txt ha-718691-m03:/home/docker/cp-test_ha-718691_ha-718691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test_ha-718691_ha-718691-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691:/home/docker/cp-test.txt ha-718691-m04:/home/docker/cp-test_ha-718691_ha-718691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test_ha-718691_ha-718691-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp testdata/cp-test.txt ha-718691-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111579691/001/cp-test_ha-718691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m02:/home/docker/cp-test.txt ha-718691:/home/docker/cp-test_ha-718691-m02_ha-718691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test_ha-718691-m02_ha-718691.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m02:/home/docker/cp-test.txt ha-718691-m03:/home/docker/cp-test_ha-718691-m02_ha-718691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test_ha-718691-m02_ha-718691-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m02:/home/docker/cp-test.txt ha-718691-m04:/home/docker/cp-test_ha-718691-m02_ha-718691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test_ha-718691-m02_ha-718691-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp testdata/cp-test.txt ha-718691-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111579691/001/cp-test_ha-718691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m03:/home/docker/cp-test.txt ha-718691:/home/docker/cp-test_ha-718691-m03_ha-718691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test_ha-718691-m03_ha-718691.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m03:/home/docker/cp-test.txt ha-718691-m02:/home/docker/cp-test_ha-718691-m03_ha-718691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test_ha-718691-m03_ha-718691-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m03:/home/docker/cp-test.txt ha-718691-m04:/home/docker/cp-test_ha-718691-m03_ha-718691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test_ha-718691-m03_ha-718691-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp testdata/cp-test.txt ha-718691-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111579691/001/cp-test_ha-718691-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m04:/home/docker/cp-test.txt ha-718691:/home/docker/cp-test_ha-718691-m04_ha-718691.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691 "sudo cat /home/docker/cp-test_ha-718691-m04_ha-718691.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m04:/home/docker/cp-test.txt ha-718691-m02:/home/docker/cp-test_ha-718691-m04_ha-718691-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m02 "sudo cat /home/docker/cp-test_ha-718691-m04_ha-718691-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 cp ha-718691-m04:/home/docker/cp-test.txt ha-718691-m03:/home/docker/cp-test_ha-718691-m04_ha-718691-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 ssh -n ha-718691-m03 "sudo cat /home/docker/cp-test_ha-718691-m04_ha-718691-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.87s)

TestMultiControlPlane/serial/StopSecondaryNode (12.97s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 node stop m02 --alsologtostderr -v 5: (12.065586647s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5: exit status 7 (901.902204ms)

-- stdout --
	ha-718691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-718691-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-718691-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-718691-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1227 09:23:39.351287  344465 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:23:39.351482  344465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:23:39.351509  344465 out.go:374] Setting ErrFile to fd 2...
	I1227 09:23:39.351527  344465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:23:39.351802  344465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:23:39.352081  344465 out.go:368] Setting JSON to false
	I1227 09:23:39.352156  344465 mustload.go:66] Loading cluster: ha-718691
	I1227 09:23:39.352240  344465 notify.go:221] Checking for updates...
	I1227 09:23:39.353295  344465 config.go:182] Loaded profile config "ha-718691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:23:39.353343  344465 status.go:174] checking status of ha-718691 ...
	I1227 09:23:39.354870  344465 cli_runner.go:164] Run: docker container inspect ha-718691 --format={{.State.Status}}
	I1227 09:23:39.407267  344465 status.go:371] ha-718691 host status = "Running" (err=<nil>)
	I1227 09:23:39.407294  344465 host.go:66] Checking if "ha-718691" exists ...
	I1227 09:23:39.407603  344465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-718691
	I1227 09:23:39.443028  344465 host.go:66] Checking if "ha-718691" exists ...
	I1227 09:23:39.443370  344465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:23:39.443433  344465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-718691
	I1227 09:23:39.470593  344465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/ha-718691/id_rsa Username:docker}
	I1227 09:23:39.584072  344465 ssh_runner.go:195] Run: systemctl --version
	I1227 09:23:39.591322  344465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:23:39.611180  344465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:23:39.692777  344465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 09:23:39.682531982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:23:39.693376  344465 kubeconfig.go:125] found "ha-718691" server: "https://192.168.49.254:8443"
	I1227 09:23:39.693415  344465 api_server.go:166] Checking apiserver status ...
	I1227 09:23:39.693460  344465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:23:39.707103  344465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1247/cgroup
	I1227 09:23:39.716105  344465 api_server.go:192] apiserver freezer: "8:freezer:/docker/b622d1aab8266c467d4d39dcdeed8bb7ce5ac19722089c23be0c33552c1e1fdc/crio/crio-36828722228ee72a8843a2764546209945f2b6149ffddf8a9a4753837b799fa9"
	I1227 09:23:39.716189  344465 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b622d1aab8266c467d4d39dcdeed8bb7ce5ac19722089c23be0c33552c1e1fdc/crio/crio-36828722228ee72a8843a2764546209945f2b6149ffddf8a9a4753837b799fa9/freezer.state
	I1227 09:23:39.724095  344465 api_server.go:214] freezer state: "THAWED"
	I1227 09:23:39.724122  344465 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:23:39.732298  344465 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:23:39.732331  344465 status.go:463] ha-718691 apiserver status = Running (err=<nil>)
	I1227 09:23:39.732346  344465 status.go:176] ha-718691 status: &{Name:ha-718691 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:23:39.732401  344465 status.go:174] checking status of ha-718691-m02 ...
	I1227 09:23:39.732740  344465 cli_runner.go:164] Run: docker container inspect ha-718691-m02 --format={{.State.Status}}
	I1227 09:23:39.749904  344465 status.go:371] ha-718691-m02 host status = "Stopped" (err=<nil>)
	I1227 09:23:39.749928  344465 status.go:384] host is not running, skipping remaining checks
	I1227 09:23:39.749935  344465 status.go:176] ha-718691-m02 status: &{Name:ha-718691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:23:39.749957  344465 status.go:174] checking status of ha-718691-m03 ...
	I1227 09:23:39.750326  344465 cli_runner.go:164] Run: docker container inspect ha-718691-m03 --format={{.State.Status}}
	I1227 09:23:39.776414  344465 status.go:371] ha-718691-m03 host status = "Running" (err=<nil>)
	I1227 09:23:39.776440  344465 host.go:66] Checking if "ha-718691-m03" exists ...
	I1227 09:23:39.776747  344465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-718691-m03
	I1227 09:23:39.795231  344465 host.go:66] Checking if "ha-718691-m03" exists ...
	I1227 09:23:39.795625  344465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:23:39.795672  344465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-718691-m03
	I1227 09:23:39.813577  344465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/ha-718691-m03/id_rsa Username:docker}
	I1227 09:23:39.911777  344465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:23:39.924620  344465 kubeconfig.go:125] found "ha-718691" server: "https://192.168.49.254:8443"
	I1227 09:23:39.924652  344465 api_server.go:166] Checking apiserver status ...
	I1227 09:23:39.924695  344465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:23:39.935406  344465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	I1227 09:23:39.943780  344465 api_server.go:192] apiserver freezer: "8:freezer:/docker/e221f3f86c27d64203c1376269c05b779a2855e4c6d7d8512d55d7b10b6297d9/crio/crio-454bf3fa9ba1ba64f120b299577b817950fb54615d01be32a8dc2ddb08f55bc9"
	I1227 09:23:39.943852  344465 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e221f3f86c27d64203c1376269c05b779a2855e4c6d7d8512d55d7b10b6297d9/crio/crio-454bf3fa9ba1ba64f120b299577b817950fb54615d01be32a8dc2ddb08f55bc9/freezer.state
	I1227 09:23:39.951589  344465 api_server.go:214] freezer state: "THAWED"
	I1227 09:23:39.951617  344465 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:23:39.959847  344465 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:23:39.959880  344465 status.go:463] ha-718691-m03 apiserver status = Running (err=<nil>)
	I1227 09:23:39.959891  344465 status.go:176] ha-718691-m03 status: &{Name:ha-718691-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:23:39.959907  344465 status.go:174] checking status of ha-718691-m04 ...
	I1227 09:23:39.960217  344465 cli_runner.go:164] Run: docker container inspect ha-718691-m04 --format={{.State.Status}}
	I1227 09:23:39.979563  344465 status.go:371] ha-718691-m04 host status = "Running" (err=<nil>)
	I1227 09:23:39.979591  344465 host.go:66] Checking if "ha-718691-m04" exists ...
	I1227 09:23:39.979901  344465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-718691-m04
	I1227 09:23:40.035312  344465 host.go:66] Checking if "ha-718691-m04" exists ...
	I1227 09:23:40.035650  344465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:23:40.035713  344465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-718691-m04
	I1227 09:23:40.057542  344465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/ha-718691-m04/id_rsa Username:docker}
	I1227 09:23:40.164302  344465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:23:40.184298  344465 status.go:176] ha-718691-m04 status: &{Name:ha-718691-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.97s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 node start m02 --alsologtostderr -v 5: (19.632266187s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5: (1.488013948s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.307032916s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.19s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 stop --alsologtostderr -v 5
E1227 09:24:17.719130  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.724568  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.734813  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.755074  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.795428  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:17.875717  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:18.036144  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:18.356759  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:18.997665  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:20.278350  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:22.838575  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:27.959665  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:24:38.200797  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 stop --alsologtostderr -v 5: (37.867076547s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 start --wait true --alsologtostderr -v 5
E1227 09:24:58.681816  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:06.818492  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:34.501894  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:39.642914  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 start --wait true --alsologtostderr -v 5: (1m12.139593242s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.19s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.8s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 node delete m03 --alsologtostderr -v 5: (10.825755076s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.80s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.21s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 stop --alsologtostderr -v 5: (36.085411135s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5: exit status 7 (123.259373ms)

-- stdout --
	ha-718691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-718691-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-718691-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 09:26:42.477746  356143 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:26:42.477874  356143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:26:42.477885  356143 out.go:374] Setting ErrFile to fd 2...
	I1227 09:26:42.477890  356143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:26:42.478426  356143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:26:42.478674  356143 out.go:368] Setting JSON to false
	I1227 09:26:42.478700  356143 mustload.go:66] Loading cluster: ha-718691
	I1227 09:26:42.479370  356143 config.go:182] Loaded profile config "ha-718691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:26:42.479389  356143 status.go:174] checking status of ha-718691 ...
	I1227 09:26:42.480062  356143 cli_runner.go:164] Run: docker container inspect ha-718691 --format={{.State.Status}}
	I1227 09:26:42.480468  356143 notify.go:221] Checking for updates...
	I1227 09:26:42.498950  356143 status.go:371] ha-718691 host status = "Stopped" (err=<nil>)
	I1227 09:26:42.498986  356143 status.go:384] host is not running, skipping remaining checks
	I1227 09:26:42.498993  356143 status.go:176] ha-718691 status: &{Name:ha-718691 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:26:42.499025  356143 status.go:174] checking status of ha-718691-m02 ...
	I1227 09:26:42.499343  356143 cli_runner.go:164] Run: docker container inspect ha-718691-m02 --format={{.State.Status}}
	I1227 09:26:42.525751  356143 status.go:371] ha-718691-m02 host status = "Stopped" (err=<nil>)
	I1227 09:26:42.525779  356143 status.go:384] host is not running, skipping remaining checks
	I1227 09:26:42.525786  356143 status.go:176] ha-718691-m02 status: &{Name:ha-718691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:26:42.525804  356143 status.go:174] checking status of ha-718691-m04 ...
	I1227 09:26:42.526102  356143 cli_runner.go:164] Run: docker container inspect ha-718691-m04 --format={{.State.Status}}
	I1227 09:26:42.548453  356143 status.go:371] ha-718691-m04 host status = "Stopped" (err=<nil>)
	I1227 09:26:42.548475  356143 status.go:384] host is not running, skipping remaining checks
	I1227 09:26:42.548482  356143 status.go:176] ha-718691-m04 status: &{Name:ha-718691-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.21s)

TestMultiControlPlane/serial/RestartCluster (69.73s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 09:27:01.563782  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m8.757192316s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (69.73s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (50.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 node add --control-plane --alsologtostderr -v 5: (49.504449836s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-718691 status --alsologtostderr -v 5: (1.051846056s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077325123s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (45.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-923177 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1227 09:29:17.719397  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-923177 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (45.303840181s)
--- PASS: TestJSONOutput/start/Command (45.31s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-923177 --output=json --user=testUser
E1227 09:29:45.404382  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-923177 --output=json --user=testUser: (5.824708536s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-435851 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-435851 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.278366ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"afc15293-d47f-447d-9b1f-b195c0a0228d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-435851] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2890943a-0a35-48fe-9f1e-fc88c30f6c57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22344"}}
	{"specversion":"1.0","id":"d6df230b-8255-465d-9588-5fcd487d1ee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"19b3d514-6805-4ebb-aa04-178c6ed3c996","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig"}}
	{"specversion":"1.0","id":"810ad261-5544-45d8-b6bb-850d22e29b38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube"}}
	{"specversion":"1.0","id":"8a9bb53f-7899-4f28-8cc3-379cd4a33003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"560de579-f53d-4bea-9a32-f9a58a8f357a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3be88a5-ba26-41c6-81c7-8efd7ed0e14d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-435851" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-435851
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.37s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-187827 --network=
E1227 09:30:06.818340  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-187827 --network=: (30.098273912s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-187827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-187827
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-187827: (2.245791866s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.37s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.13s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-240010 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-240010 --network=bridge: (27.99185746s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-240010" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-240010
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-240010: (2.099817475s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.13s)

                                                
                                    
TestKicExistingNetwork (30.28s)

=== RUN   TestKicExistingNetwork
I1227 09:30:57.089650  303043 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:30:57.106448  303043 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:30:57.107632  303043 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 09:30:57.107668  303043 cli_runner.go:164] Run: docker network inspect existing-network
W1227 09:30:57.126989  303043 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 09:30:57.127021  303043 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1227 09:30:57.127035  303043 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1227 09:30:57.127138  303043 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:30:57.144816  303043 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc45c0939b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:a9:eb:a6:c1:01} reservation:<nil>}
I1227 09:30:57.145195  303043 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001735760}
I1227 09:30:57.145223  303043 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 09:30:57.145275  303043 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 09:30:57.211987  303043 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-010497 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-010497 --network=existing-network: (28.004802918s)
helpers_test.go:176: Cleaning up "existing-network-010497" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-010497
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-010497: (2.121174951s)
I1227 09:31:27.354041  303043 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.28s)

                                                
                                    
TestKicCustomSubnet (30.92s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-790182 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-790182 --subnet=192.168.60.0/24: (28.603895405s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-790182 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-790182" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-790182
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-790182: (2.276904127s)
--- PASS: TestKicCustomSubnet (30.92s)

                                                
                                    
TestKicStaticIP (31.84s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-031551 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-031551 --static-ip=192.168.200.200: (29.427920154s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-031551 ip
helpers_test.go:176: Cleaning up "static-ip-031551" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-031551
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-031551: (2.253917555s)
--- PASS: TestKicStaticIP (31.84s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (63.51s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-983416 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-983416 --driver=docker  --container-runtime=crio: (28.30123282s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-986164 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-986164 --driver=docker  --container-runtime=crio: (29.339031653s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-983416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-986164
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-986164" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-986164
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-986164: (2.063242539s)
helpers_test.go:176: Cleaning up "first-983416" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-983416
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-983416: (2.361981719s)
--- PASS: TestMinikubeProfile (63.51s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.86s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-025871 --memory=3072 --mount-string /tmp/TestMountStartserial3901795772/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-025871 --memory=3072 --mount-string /tmp/TestMountStartserial3901795772/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.858715726s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-025871 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-027635 --memory=3072 --mount-string /tmp/TestMountStartserial3901795772/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-027635 --memory=3072 --mount-string /tmp/TestMountStartserial3901795772/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.660939019s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.66s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-027635 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-025871 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-025871 --alsologtostderr -v=5: (1.710014666s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-027635 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-027635
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-027635: (1.296817111s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-027635
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-027635: (7.530040342s)
--- PASS: TestMountStart/serial/RestartStopped (8.54s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-027635 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (70.04s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-535956 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1227 09:34:17.719379  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:35:06.814326  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-535956 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.512456468s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.04s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-535956 -- rollout status deployment/busybox: (3.124360315s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-2l6qx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-p4s72 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-2l6qx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-p4s72 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-2l6qx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-p4s72 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-2l6qx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-2l6qx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-p4s72 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-535956 -- exec busybox-769dd8b7dd-p4s72 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (28.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-535956 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-535956 -v=5 --alsologtostderr: (27.599084829s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.28s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-535956 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.74s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp testdata/cp-test.txt multinode-535956:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile704151735/001/cp-test_multinode-535956.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956:/home/docker/cp-test.txt multinode-535956-m02:/home/docker/cp-test_multinode-535956_multinode-535956-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m02 "sudo cat /home/docker/cp-test_multinode-535956_multinode-535956-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956:/home/docker/cp-test.txt multinode-535956-m03:/home/docker/cp-test_multinode-535956_multinode-535956-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m03 "sudo cat /home/docker/cp-test_multinode-535956_multinode-535956-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp testdata/cp-test.txt multinode-535956-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile704151735/001/cp-test_multinode-535956-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956-m02:/home/docker/cp-test.txt multinode-535956:/home/docker/cp-test_multinode-535956-m02_multinode-535956.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956 "sudo cat /home/docker/cp-test_multinode-535956-m02_multinode-535956.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956-m02:/home/docker/cp-test.txt multinode-535956-m03:/home/docker/cp-test_multinode-535956-m02_multinode-535956-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m03 "sudo cat /home/docker/cp-test_multinode-535956-m02_multinode-535956-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp testdata/cp-test.txt multinode-535956-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile704151735/001/cp-test_multinode-535956-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956-m03:/home/docker/cp-test.txt multinode-535956:/home/docker/cp-test_multinode-535956-m03_multinode-535956.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956 "sudo cat /home/docker/cp-test_multinode-535956-m03_multinode-535956.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 cp multinode-535956-m03:/home/docker/cp-test.txt multinode-535956-m02:/home/docker/cp-test_multinode-535956-m03_multinode-535956-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 ssh -n multinode-535956-m02 "sudo cat /home/docker/cp-test_multinode-535956-m03_multinode-535956-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.57s)

                                                
                                    
TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-535956 node stop m03: (1.318791261s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-535956 status: exit status 7 (542.041947ms)

                                                
                                                
-- stdout --
	multinode-535956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-535956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-535956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr: exit status 7 (531.164402ms)

                                                
                                                
-- stdout --
	multinode-535956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-535956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-535956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:36:03.168327  406680 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:36:03.168520  406680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:03.168553  406680 out.go:374] Setting ErrFile to fd 2...
	I1227 09:36:03.168583  406680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:36:03.169024  406680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:36:03.169317  406680 out.go:368] Setting JSON to false
	I1227 09:36:03.169379  406680 mustload.go:66] Loading cluster: multinode-535956
	I1227 09:36:03.170279  406680 config.go:182] Loaded profile config "multinode-535956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:36:03.170332  406680 status.go:174] checking status of multinode-535956 ...
	I1227 09:36:03.171132  406680 cli_runner.go:164] Run: docker container inspect multinode-535956 --format={{.State.Status}}
	I1227 09:36:03.171651  406680 notify.go:221] Checking for updates...
	I1227 09:36:03.189921  406680 status.go:371] multinode-535956 host status = "Running" (err=<nil>)
	I1227 09:36:03.189951  406680 host.go:66] Checking if "multinode-535956" exists ...
	I1227 09:36:03.190371  406680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-535956
	I1227 09:36:03.221066  406680 host.go:66] Checking if "multinode-535956" exists ...
	I1227 09:36:03.221404  406680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:03.221462  406680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-535956
	I1227 09:36:03.242856  406680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/multinode-535956/id_rsa Username:docker}
	I1227 09:36:03.339575  406680 ssh_runner.go:195] Run: systemctl --version
	I1227 09:36:03.346450  406680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:03.359363  406680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:36:03.429934  406680 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 09:36:03.420278413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:36:03.430578  406680 kubeconfig.go:125] found "multinode-535956" server: "https://192.168.67.2:8443"
	I1227 09:36:03.430616  406680 api_server.go:166] Checking apiserver status ...
	I1227 09:36:03.430668  406680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:36:03.442716  406680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	I1227 09:36:03.451303  406680 api_server.go:192] apiserver freezer: "8:freezer:/docker/bdf6bae6ebf59ff0ebbfbfc719556711a7f537b11cc4d18ed05ba6767242b525/crio/crio-ba8119eb19db223b76f597b387bb1a4bac2ee952f394e096600c97c2ccd3f7ca"
	I1227 09:36:03.451376  406680 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bdf6bae6ebf59ff0ebbfbfc719556711a7f537b11cc4d18ed05ba6767242b525/crio/crio-ba8119eb19db223b76f597b387bb1a4bac2ee952f394e096600c97c2ccd3f7ca/freezer.state
	I1227 09:36:03.459599  406680 api_server.go:214] freezer state: "THAWED"
	I1227 09:36:03.459628  406680 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 09:36:03.467724  406680 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 09:36:03.467751  406680 status.go:463] multinode-535956 apiserver status = Running (err=<nil>)
	I1227 09:36:03.467762  406680 status.go:176] multinode-535956 status: &{Name:multinode-535956 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:36:03.467779  406680 status.go:174] checking status of multinode-535956-m02 ...
	I1227 09:36:03.468095  406680 cli_runner.go:164] Run: docker container inspect multinode-535956-m02 --format={{.State.Status}}
	I1227 09:36:03.485534  406680 status.go:371] multinode-535956-m02 host status = "Running" (err=<nil>)
	I1227 09:36:03.485560  406680 host.go:66] Checking if "multinode-535956-m02" exists ...
	I1227 09:36:03.485873  406680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-535956-m02
	I1227 09:36:03.504278  406680 host.go:66] Checking if "multinode-535956-m02" exists ...
	I1227 09:36:03.504597  406680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:36:03.504641  406680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-535956-m02
	I1227 09:36:03.522095  406680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/22344-301174/.minikube/machines/multinode-535956-m02/id_rsa Username:docker}
	I1227 09:36:03.619306  406680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:36:03.631685  406680 status.go:176] multinode-535956-m02 status: &{Name:multinode-535956-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:36:03.631719  406680 status.go:174] checking status of multinode-535956-m03 ...
	I1227 09:36:03.632072  406680 cli_runner.go:164] Run: docker container inspect multinode-535956-m03 --format={{.State.Status}}
	I1227 09:36:03.648536  406680 status.go:371] multinode-535956-m03 host status = "Stopped" (err=<nil>)
	I1227 09:36:03.648556  406680 status.go:384] host is not running, skipping remaining checks
	I1227 09:36:03.648562  406680 status.go:176] multinode-535956-m03 status: &{Name:multinode-535956-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-535956 node start m03 -v=5 --alsologtostderr: (7.840143813s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.63s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-535956
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-535956
E1227 09:36:29.863114  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-535956: (25.121093903s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-535956 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-535956 --wait=true -v=5 --alsologtostderr: (47.454700215s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-535956
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.71s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-535956 node delete m03: (4.947310769s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-535956 stop: (23.79419309s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-535956 status: exit status 7 (101.22226ms)

                                                
                                                
-- stdout --
	multinode-535956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-535956-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr: exit status 7 (89.284774ms)

                                                
                                                
-- stdout --
	multinode-535956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-535956-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:54.619682  414541 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:54.619873  414541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:54.619900  414541 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:54.619922  414541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:54.620340  414541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:37:54.620634  414541 out.go:368] Setting JSON to false
	I1227 09:37:54.620689  414541 mustload.go:66] Loading cluster: multinode-535956
	I1227 09:37:54.620982  414541 notify.go:221] Checking for updates...
	I1227 09:37:54.621506  414541 config.go:182] Loaded profile config "multinode-535956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:54.621534  414541 status.go:174] checking status of multinode-535956 ...
	I1227 09:37:54.622120  414541 cli_runner.go:164] Run: docker container inspect multinode-535956 --format={{.State.Status}}
	I1227 09:37:54.640804  414541 status.go:371] multinode-535956 host status = "Stopped" (err=<nil>)
	I1227 09:37:54.640825  414541 status.go:384] host is not running, skipping remaining checks
	I1227 09:37:54.640832  414541 status.go:176] multinode-535956 status: &{Name:multinode-535956 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:37:54.640864  414541 status.go:174] checking status of multinode-535956-m02 ...
	I1227 09:37:54.641177  414541 cli_runner.go:164] Run: docker container inspect multinode-535956-m02 --format={{.State.Status}}
	I1227 09:37:54.658325  414541 status.go:371] multinode-535956-m02 host status = "Stopped" (err=<nil>)
	I1227 09:37:54.658343  414541 status.go:384] host is not running, skipping remaining checks
	I1227 09:37:54.658349  414541 status.go:176] multinode-535956-m02 status: &{Name:multinode-535956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-535956 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-535956 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.143121777s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-535956 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-535956
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-535956-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-535956-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.900262ms)

                                                
                                                
-- stdout --
	* [multinode-535956-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-535956-m02' is duplicated with machine name 'multinode-535956-m02' in profile 'multinode-535956'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-535956-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-535956-m03 --driver=docker  --container-runtime=crio: (27.73768491s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-535956
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-535956: exit status 80 (319.929731ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-535956 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-535956-m03 already exists in multinode-535956-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-535956-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-535956-m03: (2.045618022s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.24s)

                                                
                                    
TestScheduledStopUnix (102.35s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-172677 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-172677 --memory=3072 --driver=docker  --container-runtime=crio: (26.755636877s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172677 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:39:46.969613  422968 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:39:46.969836  422968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:39:46.969869  422968 out.go:374] Setting ErrFile to fd 2...
	I1227 09:39:46.969889  422968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:39:46.970197  422968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:39:46.970515  422968 out.go:368] Setting JSON to false
	I1227 09:39:46.970664  422968 mustload.go:66] Loading cluster: scheduled-stop-172677
	I1227 09:39:46.971046  422968 config.go:182] Loaded profile config "scheduled-stop-172677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:39:46.971156  422968 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/scheduled-stop-172677/config.json ...
	I1227 09:39:46.971379  422968 mustload.go:66] Loading cluster: scheduled-stop-172677
	I1227 09:39:46.971535  422968 config.go:182] Loaded profile config "scheduled-stop-172677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-172677 -n scheduled-stop-172677
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:39:47.445953  423056 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:39:47.446185  423056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:39:47.446214  423056 out.go:374] Setting ErrFile to fd 2...
	I1227 09:39:47.446234  423056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:39:47.446535  423056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:39:47.446833  423056 out.go:368] Setting JSON to false
	I1227 09:39:47.447915  423056 daemonize_unix.go:73] killing process 422985 as it is an old scheduled stop
	I1227 09:39:47.450272  423056 mustload.go:66] Loading cluster: scheduled-stop-172677
	I1227 09:39:47.450751  423056 config.go:182] Loaded profile config "scheduled-stop-172677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:39:47.450843  423056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/scheduled-stop-172677/config.json ...
	I1227 09:39:47.451038  423056 mustload.go:66] Loading cluster: scheduled-stop-172677
	I1227 09:39:47.451163  423056 config.go:182] Loaded profile config "scheduled-stop-172677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 09:39:47.456909  303043 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/scheduled-stop-172677/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172677 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1227 09:40:06.822043  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172677 -n scheduled-stop-172677
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-172677
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172677 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:40:13.418742  423528 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:40:13.418953  423528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:40:13.418979  423528 out.go:374] Setting ErrFile to fd 2...
	I1227 09:40:13.418999  423528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:40:13.419819  423528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:40:13.420333  423528 out.go:368] Setting JSON to false
	I1227 09:40:13.420484  423528 mustload.go:66] Loading cluster: scheduled-stop-172677
	I1227 09:40:13.420854  423528 config.go:182] Loaded profile config "scheduled-stop-172677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:40:13.420935  423528 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/scheduled-stop-172677/config.json ...
	I1227 09:40:13.421149  423528 mustload.go:66] Loading cluster: scheduled-stop-172677
	I1227 09:40:13.421271  423528 config.go:182] Loaded profile config "scheduled-stop-172677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1227 09:40:40.764705  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-172677
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-172677: exit status 7 (81.058705ms)

                                                
                                                
-- stdout --
	scheduled-stop-172677
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172677 -n scheduled-stop-172677
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172677 -n scheduled-stop-172677: exit status 7 (66.688148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-172677" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-172677
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-172677: (3.932859921s)
--- PASS: TestScheduledStopUnix (102.35s)
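The sequence above walks the scheduled-stop workflow: a stop can be scheduled, re-scheduled (the old scheduler process is killed), cancelled, and finally left to fire, after which status reports a stopped host with exit code 7. A minimal sketch of the same flow, assuming a hypothetical profile name "demo":

    # schedule a stop 5 minutes out, then replace it with a 15s schedule
    minikube stop -p demo --schedule 5m
    minikube stop -p demo --schedule 15s
    # cancel any pending scheduled stops
    minikube stop -p demo --cancel-scheduled
    # schedule again and let it fire; a stopped host makes status exit 7
    minikube stop -p demo --schedule 15s
    sleep 30
    minikube status -p demo --format='{{.Host}}'   # prints "Stopped"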

                                                
                                    
TestInsufficientStorage (12.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-641667 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-641667 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.041261761s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a473959-b472-4205-a6bf-82b694e92546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-641667] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d72eeb1-07b2-46b4-afd3-28cacbbac52e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22344"}}
	{"specversion":"1.0","id":"b3ab1b6c-31f0-4227-8a9a-ed900b342fc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"198e16fc-74cd-4e73-9a5a-55e5dca302a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig"}}
	{"specversion":"1.0","id":"d7504266-0f0e-4efc-9c3f-8b9706e40ee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube"}}
	{"specversion":"1.0","id":"156bc716-59ab-47d5-b38e-552f7effaafb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f08540c1-98cf-496d-91ab-294fb85c4cb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e9d69920-09ee-45c3-9ad8-0ea2370f5abd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5c403656-5919-44ed-9d02-855dc6a20791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"62744c53-0a27-4ad0-85e1-f575be120d3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ad9c9c5-a7d5-4e5e-9038-ee5bada604c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f650cae8-9781-4f84-9828-e9a50bf17ca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-641667\" primary control-plane node in \"insufficient-storage-641667\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e658d06-9ebc-44b4-ab1e-452fea15d367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a88560ec-3a5b-4fa5-a28d-f397106ab827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b465313f-8281-4c88-8245-b2092e4b87d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-641667 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-641667 --output=json --layout=cluster: exit status 7 (303.149931ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-641667","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-641667","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:41:12.837693  425382 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-641667" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-641667 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-641667 --output=json --layout=cluster: exit status 7 (307.139135ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-641667","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-641667","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:41:13.145486  425448 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-641667" does not appear in /home/jenkins/minikube-integration/22344-301174/kubeconfig
	E1227 09:41:13.155915  425448 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/insufficient-storage-641667/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-641667" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-641667
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-641667: (1.964885612s)
--- PASS: TestInsufficientStorage (12.62s)
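The run above simulates a full /var via the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables; with --output=json, start emits RSRC_DOCKER_STORAGE and status reports StatusCode 507 (InsufficientStorage). A minimal sketch of inspecting that structured status, assuming a hypothetical profile "demo" and that jq is installed:

    # cluster-wide structured status
    minikube status -p demo --output=json --layout=cluster | jq '.StatusName, .StatusDetail'
    # a node short on /var space reports StatusCode 507 ("InsufficientStorage")
    minikube status -p demo --output=json --layout=cluster | jq '.Nodes[].StatusCode'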

                                                
                                    
TestRunningBinaryUpgrade (307.5s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1654681543 start -p running-upgrade-193962 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1654681543 start -p running-upgrade-193962 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.390246824s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-193962 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-193962 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.401567939s)
helpers_test.go:176: Cleaning up "running-upgrade-193962" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-193962
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-193962: (2.020093719s)
--- PASS: TestRunningBinaryUpgrade (307.50s)
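This test creates a cluster with an older minikube release and then re-runs start on the same profile with the current binary, i.e. an in-place upgrade of a running cluster. A minimal sketch, assuming a hypothetical old binary at /tmp/minikube-old and profile "demo":

    # create the cluster with the old release
    /tmp/minikube-old start -p demo --memory=3072 --vm-driver=docker --container-runtime=crio
    # re-run start with the new binary against the same (still running) profile
    minikube start -p demo --memory=3072 --driver=docker --container-runtime=crio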

                                                
                                    
TestKubernetesUpgrade (186.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.472231838s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-830516 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-830516 --alsologtostderr: (3.721492514s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-830516 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-830516 status --format={{.Host}}: exit status 7 (180.498023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m59.631370856s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-830516 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (110.266208ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-830516] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-830516
	    minikube start -p kubernetes-upgrade-830516 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8305162 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-830516 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-830516 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.601946487s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-830516" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-830516
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-830516: (2.63452876s)
--- PASS: TestKubernetesUpgrade (186.50s)
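The block above shows both paths: the supported one (stop, then start with a newer --kubernetes-version) and the guarded one (requesting an older version on an existing cluster fails with K8S_DOWNGRADE_UNSUPPORTED, exit 106). A minimal sketch of the upgrade path, assuming a hypothetical profile "demo" and that both Kubernetes versions are available for the chosen driver:

    minikube start -p demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p demo
    # upgrading the same profile to a newer Kubernetes is allowed
    minikube start -p demo --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=crio
    # downgrading is not; this exits 106 and suggests delete/recreate instead
    minikube start -p demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio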

                                                
                                    
TestMissingContainerUpgrade (119.26s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3410475964 start -p missing-upgrade-080776 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3410475964 start -p missing-upgrade-080776 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.045614004s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-080776
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-080776
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-080776 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-080776 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.416785414s)
helpers_test.go:176: Cleaning up "missing-upgrade-080776" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-080776
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-080776: (3.326457583s)
--- PASS: TestMissingContainerUpgrade (119.26s)
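Here the docker container backing the profile (it shares the profile's name) is stopped and removed out from under minikube, and the newer binary's start recreates it from the saved profile config. A minimal sketch, assuming a hypothetical profile "demo" created with the docker driver:

    # simulate the node container going missing
    docker stop demo
    docker rm demo
    # start recreates the container for the existing profile
    minikube start -p demo --memory=3072 --driver=docker --container-runtime=crio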

                                                
                                    
TestPause/serial/Start (57.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-212930 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-212930 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (57.237280466s)
--- PASS: TestPause/serial/Start (57.24s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-212930 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-212930 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.985176514s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (309.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.412254326 start -p stopped-upgrade-984433 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.412254326 start -p stopped-upgrade-984433 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.309752019s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.412254326 -p stopped-upgrade-984433 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.412254326 -p stopped-upgrade-984433 stop: (1.349660379s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-984433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 09:44:17.719390  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:45:06.814904  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-984433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.051879509s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (309.71s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-984433
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-984433: (1.935479581s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.94s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (65.11s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-677058 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1227 09:49:17.719267  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-677058 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (58.276539247s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-677058 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-677058
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-677058: (5.959636326s)
--- PASS: TestPreload/Start-NoPreload-PullImage (65.11s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (51.29s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-677058 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1227 09:50:06.814711  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-677058 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.985775401s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-677058 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (51.29s)
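The two preload subtests first build a cluster with --preload=false, pull an extra image into it, stop, and then restart with preload enabled to verify the manually pulled image survives the restart. A minimal sketch, assuming a hypothetical profile "demo"; the busybox mirror path is the one used in the run above:

    minikube start -p demo --memory=3072 --preload=false --driver=docker --container-runtime=crio
    minikube -p demo image pull ghcr.io/medyagh/image-mirrors/busybox:latest
    minikube stop -p demo
    # restart with preloaded images and confirm the user image is still present
    minikube start -p demo --preload=true --driver=docker --container-runtime=crio
    minikube -p demo image list | grep busybox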

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-620940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-620940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (105.238995ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-620940] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
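--no-kubernetes and --kubernetes-version are mutually exclusive; the MK_USAGE error (exit 14) also covers the case where the version is pinned in global config rather than on the command line. A minimal sketch, assuming a hypothetical profile "demo":

    # rejected with MK_USAGE (exit 14): a version pin contradicts --no-kubernetes
    minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # if the version comes from global config instead, clear it first
    minikube config unset kubernetes-version
    minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio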

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (27.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-620940 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-620940 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.4179125s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-620940 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.76s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-620940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-620940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.343896031s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-620940 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-620940 status -o json: exit status 2 (297.730168ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-620940","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-620940
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-620940: (1.983496351s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.63s)

                                                
                                    
TestNoKubernetes/serial/Start (7.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-620940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-620940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.749545817s)
--- PASS: TestNoKubernetes/serial/Start (7.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22344-301174/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-620940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-620940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.877377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
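The "kubelet not running" check is simply systemctl is-active executed over minikube ssh; a non-zero exit from the remote command surfaces as exit status 1 from the ssh wrapper. A minimal sketch, assuming a hypothetical profile "demo" started without Kubernetes:

    # exits 0 only if the kubelet unit is active inside the node
    minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet" && echo "kubelet running" || echo "kubelet not running"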

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-620940
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-620940: (1.295379986s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-620940 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-620940 --driver=docker  --container-runtime=crio: (7.049457971s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-620940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-620940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.938932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestNetworkPlugins/group/false (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-246753 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-246753 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (175.252582ms)

                                                
                                                
-- stdout --
	* [false-246753] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:52:00.793440  478085 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:52:00.793610  478085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:00.793639  478085 out.go:374] Setting ErrFile to fd 2...
	I1227 09:52:00.793661  478085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:00.793924  478085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-301174/.minikube/bin
	I1227 09:52:00.794415  478085 out.go:368] Setting JSON to false
	I1227 09:52:00.795291  478085 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9270,"bootTime":1766819851,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 09:52:00.795387  478085 start.go:143] virtualization:  
	I1227 09:52:00.798818  478085 out.go:179] * [false-246753] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:52:00.802677  478085 notify.go:221] Checking for updates...
	I1227 09:52:00.803563  478085 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:52:00.806460  478085 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:52:00.809653  478085 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-301174/kubeconfig
	I1227 09:52:00.813038  478085 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-301174/.minikube
	I1227 09:52:00.815952  478085 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:52:00.818872  478085 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:52:00.822377  478085 config.go:182] Loaded profile config "force-systemd-env-029895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:52:00.822506  478085 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:52:00.851763  478085 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:52:00.851880  478085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:52:00.904537  478085 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:52:00.895189923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:52:00.904643  478085 docker.go:319] overlay module found
	I1227 09:52:00.907762  478085 out.go:179] * Using the docker driver based on user configuration
	I1227 09:52:00.910625  478085 start.go:309] selected driver: docker
	I1227 09:52:00.910647  478085 start.go:928] validating driver "docker" against <nil>
	I1227 09:52:00.910661  478085 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:52:00.914271  478085 out.go:203] 
	W1227 09:52:00.917202  478085 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1227 09:52:00.920063  478085 out.go:203] 

                                                
                                                
** /stderr **
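The failure above is the expected guard: with the crio runtime, --cni=false is rejected (MK_USAGE, exit 14) because CRI-O needs a CNI plugin for pod networking. A minimal sketch of starting the same configuration with an explicit CNI instead, assuming a hypothetical profile "demo" (any CNI minikube supports would do, e.g. bridge):

    # rejected: the crio container runtime requires CNI
    minikube start -p demo --memory=3072 --cni=false --driver=docker --container-runtime=crio
    # accepted: pick a CNI explicitly (or omit --cni to let minikube choose)
    minikube start -p demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio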
net_test.go:88: 
----------------------- debugLogs start: false-246753 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-246753

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-246753" does not exist

>>> k8s: describe api server pod(s):
error: context "false-246753" does not exist

>>> k8s: api server logs:
error: context "false-246753" does not exist

>>> host: /etc/cni:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: ip a s:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: ip r s:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: iptables-save:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: iptables table nat:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> k8s: describe kube-proxy daemon set:
error: context "false-246753" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-246753" does not exist

>>> k8s: kube-proxy logs:
error: context "false-246753" does not exist

>>> host: kubelet daemon status:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: kubelet daemon config:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> k8s: kubelet logs:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-246753

>>> host: docker daemon status:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: docker daemon config:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /etc/docker/daemon.json:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: docker system info:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: cri-docker daemon status:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: cri-docker daemon config:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: cri-dockerd version:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: containerd daemon status:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: containerd daemon config:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /etc/containerd/config.toml:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: containerd config dump:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: crio daemon status:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: crio daemon config:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: /etc/crio:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

>>> host: crio config:
* Profile "false-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246753"

----------------------- debugLogs end: false-246753 [took: 3.29610426s] --------------------------------
helpers_test.go:176: Cleaning up "false-246753" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-246753
--- PASS: TestNetworkPlugins/group/false (3.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (64.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1227 10:00:06.814508  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m4.043567946s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-156305 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dfb2f88f-5b2b-4d4e-947d-54a4743f76e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dfb2f88f-5b2b-4d4e-947d-54a4743f76e3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003754826s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-156305 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)
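[editor's note] The deploy-and-wait step above can be reproduced by hand against the same profile. A minimal sketch with standard kubectl commands; the context name, manifest path, and pod label are taken from the log above, and the 8m timeout mirrors the test's wait window:

	kubectl --context old-k8s-version-156305 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-156305 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-156305 exec busybox -- /bin/sh -c "ulimit -n"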

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-156305 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-156305 --alsologtostderr -v=3: (12.024077892s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305: exit status 7 (81.929696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-156305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
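[editor's note] Enabling an addon on a stopped profile only records it in the profile's config. A quick sketch for confirming the dashboard addon was registered, assuming the same binary and profile name and that addon state can be read while the cluster is stopped:

	out/minikube-linux-arm64 addons list -p old-k8s-version-156305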

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (53.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-156305 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.356853423s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156305 -n old-k8s-version-156305
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-b6nzn" [14248e6a-3981-4785-b2a6-b5c3128b97dd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003854536s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-b6nzn" [14248e6a-3981-4785-b2a6-b5c3128b97dd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004302886s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-156305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-156305 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (55.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (55.809402483s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-021144 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7e543378-18ad-4c55-8879-0efffa9bdb70] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7e543378-18ad-4c55-8879-0efffa9bdb70] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003927052s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-021144 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-021144 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-021144 --alsologtostderr -v=3: (12.034605704s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144: exit status 7 (82.701667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-021144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-021144 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.412511267s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-021144 -n no-preload-021144
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-khhmw" [bbd8214e-d371-4400-9cbc-7d9e3fa0ba40] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003424338s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.06s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:04:17.719577  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/functional-725125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (54.061377781s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-khhmw" [bbd8214e-d371-4400-9cbc-7d9e3fa0ba40] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0042329s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-021144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-021144 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:05:06.814609  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.525753382s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-017122 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b1105fa6-5257-4bdf-a5ef-08d24fc959ae] Pending
helpers_test.go:353: "busybox" [b1105fa6-5257-4bdf-a5ef-08d24fc959ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b1105fa6-5257-4bdf-a5ef-08d24fc959ae] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003037182s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-017122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-017122 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-017122 --alsologtostderr -v=3: (12.107146214s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [009d55c6-9295-4db0-86af-fd454f83cf65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1227 10:05:29.710348  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:29.715663  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:29.726033  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:29.746350  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:29.787016  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:29.867433  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:30.028136  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:30.348737  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:30.989384  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [009d55c6-9295-4db0-86af-fd454f83cf65] Running
E1227 10:05:32.270608  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:34.831557  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00287256s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122: exit status 7 (86.283709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-017122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-017122 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (51.138875791s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-017122 -n embed-certs-017122
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-681744 --alsologtostderr -v=3
E1227 10:05:50.193397  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-681744 --alsologtostderr -v=3: (12.314205621s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744: exit status 7 (102.309632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-681744 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:06:10.674031  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-681744 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (54.616034291s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-681744 -n default-k8s-diff-port-681744
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zzkkj" [cf8789ad-7311-4a2e-af96-825b4d95b422] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003051269s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zzkkj" [cf8789ad-7311-4a2e-af96-825b4d95b422] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004877788s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-017122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-017122 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rmdxj" [ecc0e3fd-a251-4e74-9eac-8bb2a1189811] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002772926s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.02s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:06:51.635211  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (36.016171013s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rmdxj" [ecc0e3fd-a251-4e74-9eac-8bb2a1189811] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004751489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-681744 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-681744 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.56s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-425359 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-425359 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.310752882s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-425359" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-425359
--- PASS: TestPreload/PreloadSrc/gcs (4.56s)

                                                
                                    
TestPreload/PreloadSrc/github (3.94s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-343343 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-343343 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.749550279s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-343343" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-343343
--- PASS: TestPreload/PreloadSrc/github (3.94s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.55s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-955830 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-955830" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-955830
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.55s)
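[editor's note] The gcs-cached run completes in well under a second because the v1.34.0-rc.2 preload tarball was already downloaded by the earlier run and is reused from the local cache rather than fetched again. A sketch for inspecting that cache, assuming minikube's default preload cache directory:

	ls ~/.minikube/cache/preloaded-tarball/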

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.44s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.439456637s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-133340 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-133340 --alsologtostderr -v=3: (1.677680182s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340: exit status 7 (75.226348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-133340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-133340 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (16.484824906s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-133340 -n newest-cni-133340
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-133340 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (47.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1227 10:08:03.227446  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.232771  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.243191  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.264264  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.304634  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.384990  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.545487  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:03.866121  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:04.507088  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:05.787671  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:08.347906  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:13.468997  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:13.555648  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.887176462s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.89s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-246753 "pgrep -a kubelet"
I1227 10:08:14.916538  303043 config.go:182] Loaded profile config "auto-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-246753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pndl9" [6342c5b6-1e58-40c0-8bdd-675dd9c14aa0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pndl9" [6342c5b6-1e58-40c0-8bdd-675dd9c14aa0] Running
E1227 10:08:23.709308  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003960084s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.34s)
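The NetCatPod steps above create a deployment from testdata/netcat-deployment.yaml and then poll until a pod labeled app=netcat reports Running. The report does not include the helper's source; the following is only a rough client-go sketch of that kind of label-selector wait (package layout, kubeconfig resolution, and poll interval are illustrative assumptions, not taken from the minikube test code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until at least one pod matching selector in ns is Running,
// roughly mirroring the `waiting 15m0s for pods matching "app=netcat"` lines above.
func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Kubeconfig resolution here is an assumption for illustration; the integration
	// tests select a per-profile kubectl context (e.g. auto-246753) instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunningPod(cs, "default", "app=netcat", 15*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("app=netcat is Running")
}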

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
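The DNS, Localhost and HairPin checks above all run inside the netcat pod: an nslookup of kubernetes.default, then nc -w 5 -z against localhost:8080 and against the pod's own Service name netcat:8080 (the hairpin case). A minimal stand-alone equivalent of those probes, sketched in Go (the host names and port 8080 come from the logged commands; everything else is illustrative and assumes it runs inside the cluster):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// dialCheck is the rough equivalent of the logged `nc -w 5 -z <host> 8080`:
// it only verifies that a TCP connection can be opened, then closes it.
func dialCheck(host string) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// DNS check: resolve the in-cluster service name, like `nslookup kubernetes.default`.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Fprintf(os.Stderr, "dns lookup failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("kubernetes.default resolves to %v\n", addrs)

	// Localhost and hairpin checks: dial ourselves directly, then via our own Service.
	for _, host := range []string{"localhost", "netcat"} {
		if err := dialCheck(host); err != nil {
			fmt.Fprintf(os.Stderr, "dial %s:8080 failed: %v\n", host, err)
			os.Exit(1)
		}
		fmt.Printf("dial %s:8080 ok\n", host)
	}
}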

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-g6bft" [a3fb0e66-af09-4777-bf29-cf637dc47216] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004236275s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.944691918s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-246753 "pgrep -a kubelet"
I1227 10:08:52.749875  303043 config.go:182] Loaded profile config "kindnet-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-246753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-spk4j" [c27914cc-167c-4b06-856e-dc4c25bfe216] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-spk4j" [c27914cc-167c-4b06-856e-dc4c25bfe216] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004354475s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (55.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1227 10:09:49.865850  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.891119778s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.89s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-7bmzn" [77f247e1-b434-4240-a01d-dac406620be4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004420867s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-246753 "pgrep -a kubelet"
E1227 10:10:06.814649  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/addons-730938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1227 10:10:07.075549  303043 config.go:182] Loaded profile config "calico-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-246753 replace --force -f testdata/netcat-deployment.yaml
I1227 10:10:07.487198  303043 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-dt7vn" [28e661b7-df96-4fb9-876c-5fdd9a76a2a7] Pending
helpers_test.go:353: "netcat-5dd4ccdc4b-dt7vn" [28e661b7-df96-4fb9-876c-5fdd9a76a2a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005149866s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-246753 "pgrep -a kubelet"
I1227 10:10:27.439486  303043 config.go:182] Loaded profile config "custom-flannel-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.40s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-246753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-tsmz8" [2d417cfc-1dc7-44ac-b6b3-6cc4e26248dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 10:10:28.702071  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:28.707797  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:28.718099  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:28.738761  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:28.779801  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:28.860956  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:29.021396  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:29.341597  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:29.709616  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:29.982637  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:31.263531  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:33.823845  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-tsmz8" [2d417cfc-1dc7-44ac-b6b3-6cc4e26248dd] Running
E1227 10:10:38.944110  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005520876s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1227 10:10:47.071887  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:49.184765  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:10:57.396679  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/old-k8s-version-156305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (50.008218277s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1227 10:11:09.665032  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/default-k8s-diff-port-681744/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.960394051s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.96s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-246753 "pgrep -a kubelet"
I1227 10:11:33.707333  303043 config.go:182] Loaded profile config "enable-default-cni-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-246753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lxbql" [0e042ec9-4fea-4e75-abd6-4bbc69a579d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lxbql" [0e042ec9-4fea-4e75-abd6-4bbc69a579d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010623114s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-cfhxl" [bc5bde00-754c-40ec-af68-9c3f43cf3b6d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003139227s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (47.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-246753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (47.482896376s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-246753 "pgrep -a kubelet"
I1227 10:12:10.366564  303043 config.go:182] Loaded profile config "flannel-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-246753 replace --force -f testdata/netcat-deployment.yaml
I1227 10:12:10.648955  303043 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ztdc7" [1b04c811-7a79-4572-8331-87921ede6f8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-ztdc7" [1b04c811-7a79-4572-8331-87921ede6f8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004756504s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-246753 "pgrep -a kubelet"
I1227 10:12:54.202940  303043 config.go:182] Loaded profile config "bridge-246753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-246753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-22wb8" [41c4f5c8-d814-4803-9f77-4dcf054af73a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-22wb8" [41c4f5c8-d814-4803-9f77-4dcf054af73a] Running
E1227 10:13:03.228115  303043 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-301174/.minikube/profiles/no-preload-021144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003643495s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-246753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-246753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-726041 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-726041" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-726041
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-242374" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-242374
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-246753 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-246753" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-246753

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246753"

                                                
                                                
----------------------- debugLogs end: kubenet-246753 [took: 3.600922183s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-246753" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-246753
--- SKIP: TestNetworkPlugins/group/kubenet (3.76s)
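Note on the output above: every ">>>" probe reports either "context was not found" (kubectl) or "Profile not found" (minikube) because debugLogs still queries the kubenet-246753 profile even though the test was skipped before any cluster was created. A minimal sketch for confirming this locally, assuming a standard kubectl/minikube install (the profile name is specific to this run):

	# no kubectl context exists for the skipped profile
	kubectl config get-contexts kubenet-246753
	# no minikube profile exists either
	minikube profile list
	# starting it, as the log suggests, would create both the profile and the context
	minikube start -p kubenet-246753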

                                                
                                    
TestNetworkPlugins/group/cilium (3.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-246753 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-246753" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-246753

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-246753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246753"

                                                
                                                
----------------------- debugLogs end: cilium-246753 [took: 3.767427661s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-246753" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-246753
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)
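As with the kubenet group, net_test.go skips the cilium group as outdated, so the debug probes above hit a profile that was never created. For reference, a sketch of re-running only this group from a minikube checkout (hypothetical invocation: the -run pattern is standard Go, while the --minikube-start-args flag and the test/integration path are assumed from minikube's integration-test harness and may need adjusting):

	# run only the cilium network-plugin tests against the docker/crio combination used in this report
	go test ./test/integration -run "TestNetworkPlugins/group/cilium" -timeout 30m \
		--minikube-start-args="--driver=docker --container-runtime=crio"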

                                                
                                    